Table of Contents
Rac10gR2OnLinux
1. Introduction
    1.1. What you need to know
        1.1.1. Software required for install
        1.1.2. Processor Model
        1.1.3. Required RPM packages
            1.1.3.1. 32-Bit Required RPM's
            1.1.3.2. 64-Bit Required RPM's
            1.1.3.3. Oracle Enterprise Linux
    1.2. Installation steps
    1.3. Schematic
        1.3.1. Hardware/software configuration BEFORE Oracle software install
        1.3.2. Hardware/software configuration AFTER Oracle software install
    1.4. Installation Method
2. Prepare the cluster nodes for Oracle RAC
    2.1. User Groups and Accounts
        2.1.1. Creating the OSDBA (dba) Group
        2.1.2. Creating an OSOPER Group (Optional)
        2.1.3. Creating the Oracle Inventory Group (oinstall)
        2.1.4. Creating the Oracle Software Owner User
            2.1.4.1. Determining Whether an Oracle Software Owner User Exists
            2.1.4.2. Creating an Oracle Software Owner User
            2.1.4.3. Modifying an Oracle Software Owner User
        2.1.5. Verifying That the User nobody Exists
        2.1.6. Creating Identical Users and Groups on Other Cluster Nodes
    2.2. SSH Setup
        2.2.1. Create RSA Keys On Each Node
        2.2.2. Add All Keys to a Common authorized_keys File
        2.2.3. Enabling SSH User Equivalency on Cluster Member Nodes
    2.3. Configuring the oracle's User Environment
    2.4. Network Requirements
        2.4.1. Network Hardware Requirements
        2.4.2. IP Address Requirements
        2.4.3. Network Ping Tests
        2.4.4. Network Adapter configuration
    2.5. Time Sync
    2.6. Configuring Kernel Parameters
    2.7. Setting Shell Limits for the oracle User
    2.8. Configuration of the Hangcheck-timer Module
        2.8.1. Hangcheck-timer Module verification procedure
    2.9. Platform Specific Setup
    2.10. Required Software Directories
        2.10.1. Oracle Base Directory
        2.10.2. Oracle Inventory Directory
        2.10.3. Oracle Clusterware Home Directory
        2.10.4. Oracle Home Directory
        2.10.5. Identifying Existing Oracle Directories
    2.11. CVU Stage Check
3. Prepare the shared storage for Oracle RAC
    3.1. Create Partitions
        3.1.1. Example of Configuring Block Device Storage for Oracle Clusterware
        3.1.2. Example of Creating a Udev Permissions File for Oracle Clusterware
        3.1.3. Platform Specific Settings
    3.2. Installing the cvuqdisk Package for Linux
    3.3. CVU
4. Oracle Clusterware Installation and Configuration
    4.1. CVU Pre Oracle Clusterware install check
    4.2. Installing Oracle Clusterware with OUI
        4.2.1. Oracle Clusterware has been installed
    4.3. CVU Post Oracle Clusterware install check
    4.4. Changing diagwait parameter to delay node reboot
5. Oracle Clusterware patching
6. Install Oracle ASM Software only Home
7. Oracle ASM Software Home Patching
8. Oracle RAC Database Listener Creation
    8.1. Create Node specific network listeners
9. Oracle ASM Instance and diskgroup Creation
    9.1. Create ASM Instance and add the +DATA and +FLASH diskgroups
10. Oracle RAC Database Home Software Install
    10.1. CVU check - Pre Database Install
    10.2. Oracle RAC Database Home Software Install
11. Oracle RAC Software Home Patching
12. Oracle RAC Database Creation
    12.1. Use dbca to create the RAC database
Rac10gR2OnLinux
1. Introduction
1.1. What you need to know
Clusterware Home
Starting with Oracle Database 10g Release 2 (10.2), Oracle Clusterware should be installed in a
separate Oracle Clusterware home directory. You should not install Oracle Clusterware in a
release-specific Oracle home mount point, typically /u01/app/oracle/product/10.2.0.
ASM Instance
With Oracle Database 10g Release 2 (10.2), a single ASM instance on each node can serve
disk groups to all the database instances in the cluster, whether a database is a RAC database
or a single-instance database. Automatic Storage Management should be installed in a separate ASM
home directory.
Virtual IP (VIP)
Oracle Database uses the VIP address to provide a secondary IP address on the node's main network
adapter. Clients connect to the VIP to gain access to the database. The purpose of the VIP is to improve
detection of node failure by clients, and to facilitate failover. The VIP is not a cluster IP.
Certification
Ensure that you have a certified combination of the operating system and Oracle Database software
release by referring to the OracleMetaLink certification information.
1.1.1. Software required for install
♦ Base Oracle 10gR2 (10.2.0.1) DVD
♦ 10.2.0.4 Patchset (*.zip file from OracleMetaLink)
1.1.2. Processor Model
This paper covers both 32-bit and 64-bit installs. Please note that the Oracle software version you install must
match the OS architecture (e.g., on 32-bit OEL4 you must install the 32-bit versions of the Oracle software).
For additional information on Linux operating system support refer to Metalink Note 266043.1.
1.1.3. Required RPM packages
Install your Linux operating system with the default software packages (RPMs). This installation includes
most of the required packages and helps you limit manual checks of package dependencies. Do not customize
RPMs during installation.
To determine whether the required packages are installed, enter commands similar to the following (a
batch-check sketch follows the package lists below):
# rpm -q package_name
1.1.3.1. 32-Bit Required RPM's
• Red Hat Enterprise Linux 3:
♦ gcc-3.2.3-34
♦ glibc-2.3.2-95.20
♦ make-3.79.1
♦ openmotif21-2.1.30-8
♦ compat-db-4.0.14.5
♦ compat-gcc-7.3-2.96.128
♦ compat-gcc-c++-7.3-2.96.128
♦ compat-libstdc++-7.3-2.96.128
♦ compat-libstdc++-devel-7.3-2.96.128
♦ setarch-1.3-1
♦ XFree86 (Spatial only)
♦ XFree86-devel (Spatial only)
• Red Hat Enterprise Linux 4 / Oracle Enterprise Linux 4:
♦ binutils-2.15.92.0.2-10.EL4
♦ compat-db-4.1.25-9
♦ compat-libstdc++-296-2.96-132.7.2
♦ compat-libstdc++-33-3.2.3-47.3
♦ control-center-2.8.0-12
♦ gcc-3.4.3-9.EL4
♦ gcc-c++-3.4.3-9.EL4
♦ glibc-2.3.4-2
♦ glibc-common-2.3.4-2
♦ gnome-libs-1.4.1.2.90-44.1
♦ libstdc++-3.4.3-9.EL4
♦ libstdc++-devel-3.4.3-9.EL4
♦ make-3.80-5
♦ pdksh-5.2.14-30
♦ sysstat-5.0.5-1
♦ xscreensaver-4.18-5.rhel4.2
♦ openmotif21-2.1.30-11.RHEL4.2 (required to install Oracle demos)
♦ libaio-0.3.102-1
• Oracle Enterprise Linux 5 / Red Hat Enterprise Linux 5:
◊ gcc-4.1.1-52.el5.i386.rpm
◊ libstdc++-devel-4.1.1-52.el5.i386.rpm
◊ glibc-devel-2.5-12.i386.rpm
◊ glibc-headers-2.5-12.i386.rpm
◊ libgomp-4.1.1-52.el5.i386.rpm
♦ libXp-1.0.0-8.i386.rpm
♦ compat-gcc-34-3.4.6-4.i386.rpm
♦ compat-gcc-c++-34-3.4.6-4.i386.rpm
♦ compat-libstdc++-33-3.2.3-61.i386.rpm
♦ sysstat-5.0.5-1.i386.rpm (OEL5 only)
• SuSE SLES9:
• SuSE SLES10:
♦ glibc-devel-2.4-31.2.i586.rpm
♦ gcc-4.1.0-28.4.i586.rpm
♦ libstdc++-devel-4.1.0-28.4.i586.rpm
♦ gcc-c++-4.1.0-28.4.i586.rpm
♦ libaio-devel-0.3.104-14.2.i586.rpm
1.1.3.2. 64-Bit Required RPM's
• Red Hat Enterprise Linux 3:
♦ compat-db 4.0.14-5.1
♦ compat-gcc-7.3-2.96.128 (32 bit)
♦ compat-gcc-c++-7.3-2.96.128 (32 bit)
♦ compat-libstdc++-7.3-2.96.128 (32 bit)
♦ compat-libstdc++-devel-7.3-2.96.128 (32 bit)
♦ control-center-2.2.0.1-13
♦ gcc-3.2.3-47
♦ gcc-c++-3.2.3-47
♦ gdb-6.1post-1.20040607.52
♦ glibc-2.3.2-95.30
♦ glibc-common-2.3.2-95.30
♦ glibc-devel-2.3.2-95.30
♦ glibc-devel-2.3.2-95.20 (32 bit)
♦ gnome-libs-1.4.1.2.90-34.2 (32 bit)
♦ libaio-0.3.96-3
♦ libaio-devel-0.3.96-3
♦ libstdc++-3.2.3-47
♦ libstdc++-devel-3.2.3-47
♦ make-3.79.1-17
♦ openmotif-2.2.3-3.RHEL3
♦ sysstat-5.0.5-5.rhel3
♦ setarch-1.3-1
• Oracle Enterprise Linux 5 / Red Hat Enterprise Linux 5:
♦ compat-gcc-34-3.4.6-4
♦ compat-gcc-34-c++-3.4.6-4
♦ compat-libstdc++-33-3.2.3-61.i386.rpm
♦ compat-libstdc++-33-3.2.3-61.x86_64.rpm
♦ gcc-c++-4.1.1-52.el5.x86_64.rpm and all its dependent packages:
◊ libstdc++-devel-4.1.1-52.el5.x86_64.rpm
◊ glibc-headers-2.5-12.x86_64.rpm
◊ glibc-devel-2.5-12.i386.rpm
◊ glibc-devel-2.5-12.x86_64.rpm
◊ libgomp-4.1.1-52.el5.x86_64.rpm
◊ gcc-4.1.1-52.el5.x86_64.rpm
♦ libXp-1.0.0-8.i386.rpm
♦ sysstat-7.0.0-3.el5.x86_64.rpm
♦ util-linux-2.13-0.44.el5.x86_64 (for raw devices)
• SUSE SLES9:
♦ binutils-2.15.90.0.1.1-32.5
♦ db1-1.85-85.1
♦ gcc-3.3.3-43.24
♦ gcc-c++-3.3.3-43.24
♦ glibc-2.3.3-98.28
♦ glibc-32bit-9-200506071326
♦ glibc-devel-2.3.3-98-47
♦ glibc-devel-32bit-9-200506062332
♦ libaio-0.3.102.1.2
♦ libaio-devel-0.3.102.1.2
• SUSE SLES10:
♦ glibc-devel-2.4-31.2.x86_64.rpm
♦ gcc-4.1.0-28.4.x86_64.rpm
♦ libstdc++-devel-4.1.0-28.4.x86_64.rpm
♦ gcc-c++-4.1.0-28.4.x86_64.rpm
♦ glibc-devel-32bit-2.4-31.2.x86_64.rpm
♦ libaio-devel-0.3.104-14.2.x86_64.rpm
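Where many packages must be verified, a short shell loop can reduce the manual checking. The following is a minimal sketch (the package list shown is a representative subset only, not the full list for your platform):
for pkg in binutils gcc gcc-c++ glibc glibc-devel libaio make sysstat; do
    rpm -q "$pkg" > /dev/null || echo "MISSING: $pkg"
done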
1.1.3.3. Oracle Enterprise Linux
As part of the Oracle Unbreakable Linux Program, Oracle offers for download or on CD:
• Oracle Enterprise Linux 4, fully compatible with Red Hat Enterprise Linux 4 AS/ES
• Oracle Enterprise Linux 5, fully compatible with Red Hat Enterprise Linux 5 Server and Advanced Platform
If you install the Oracle Validated Configuration RPM, then it sets and verifies system parameters based on
recommendations from the Oracle Validated Configurations program, and installs any additional packages
needed for installing Oracle Clusterware and Oracle Database. It also updates sysctl.conf settings,
system startup parameters, user limits, and driver parameters to values that testing shows will provide better
performance.
• Note: Additional information on the Oracle Validated Configurations can be found at the following
URL:
http://www.oracle.com/technology/tech/linux/validated-configurations/index.html
1.2. Installation steps
• Preparation
♦ Install the Oracle Clusterware (using the push mechanism to install on the other nodes in the cluster)
♦ Patch the Clusterware to the latest patchset
• Establish ASM
Note: Since RHEL5/OEL5 and SLES10 are not recognized by runInstaller, you will need to execute it with
the -ignoreSysPrereqs option.
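For example, from the directory containing the staged installation media:
$ ./runInstaller -ignoreSysPrereqs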
1.3. Schematic
The following is a schematic of the software and hardware layout of a two-node RAC cluster. As explained in this
document, the actual number of LUNs required will vary depending on your mirroring requirements.
• 5 × 270 MB LUNs
• Block Device: The Oracle Clusterware, ASM and Oracle Database Home binaries are established on
the local disk of each of the RAC nodes. The files required by Oracle Clusterware are on block
devices. The database data-files are on ASM.
It should be noted that there are other installation configurations. This document does not cover these
methods:
• OCFS: The Oracle Clusterware, ASM and Oracle Database Home binaries are established on the local disk of each
of the RAC nodes. The files required by Oracle Clusterware are on OCFS. The database data-files are
also on OCFS.
There are other possible combinations although Oracle recommends one of the above methods.
• You cannot place the Clusterware devices on Direct NFS. Direct NFS is not covered here.
2. Prepare the cluster nodes for Oracle RAC
This installation routine presumes that you have a two-node Linux cluster. There are a number of items that
require checking before the install commences. Getting this right will enhance your install experience.
2.1. User Groups and Accounts
2.1.1. Creating the OSDBA (dba) Group
You must create this group the first time you install Oracle Database software on the system. This group
identifies operating system user accounts that have database administrative privileges (the SYSDBA
privilege). The default name for this group is dba. You must create an OSDBA group in the following
circumstances:
• An OSDBA group does not exist, for example, if this is the first installation of Oracle Database
software on the system
• An OSDBA group exists, but you want to give a different group of operating system users database
administrative privileges in a new Oracle installation
To determine whether the OSDBA group exists, enter the following command:
# grep dba /etc/group
If the OSDBA group does not exist, or if you require a new OSDBA group, then create it as follows. In the
following command, use the group name dba unless a group with that name already exists.
# /usr/sbin/groupadd dba
2.1.2. Creating an OSOPER Group (Optional)
This is an optional group. Create this group if you want a separate group of operating system users to have a
limited set of database administrative privileges (the SYSOPER privilege). By default, members of the
OSDBA group also have the SYSOPER privilege. The usual name chosen for this group is oper. For most
installations, it is sufficient to create only the OSDBA group.
If you require a new OSOPER group, then create it as follows. In the following command, use the group name
oper unless a group with that name already exists.
# /usr/sbin/groupadd oper
2.1.3. Creating the Oracle Inventory Group (oinstall)
When you install Oracle software on the system for the first time, Oracle Universal Installer creates the
oraInst.loc file. This file identifies the name of the Oracle Inventory group (typically, oinstall), and the path
of the Oracle Inventory directory. If you have an existing Oracle Inventory, then ensure that you use the same
Oracle Inventory for all Oracle software installations. If you do not have an existing Oracle Inventory, then
you should create an Oracle Inventory group. To determine whether you have an Oracle Inventory on your
system, enter the following command:
# more /etc/oraInst.loc
If the oraInst.loc file exists, then the output from this command is similar to the following:
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
If the Oracle Inventory group does not exist then create it as follows:
# /usr/sbin/groupadd oinstall
2.1.4. Creating the Oracle Software Owner User
You must create an Oracle software owner user in the following circumstances:
• If an Oracle software owner user does not exist, for example, if this is the first installation of Oracle
software on the system
• If an Oracle software owner user exists, but you want to use a different operating system user, with
different group membership, to give database administrative privileges to those groups in a new
Oracle Database installation.
Note:If you intend to use multiple Oracle software owners for different Oracle homes, then you should
create a separate Oracle software owner for Oracle Clusterware, and install Oracle Clusterware using
the Oracle Clusterware software owner.
To determine whether an Oracle software owner user named oracle exists, enter the following command:
# id oracle
If the oracle user exists, then the output from this command is similar to the following:
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(oracle)
If the user exists, then determine whether you want to use the existing user or create another oracle user. If
you want to use the existing user, then ensure that the user's primary group is the Oracle Inventory group and
that it is a member of the appropriate OSDBA and OSOPER groups.
In the following procedure, use the user name oracle unless a user with that name already exists. If the Oracle
software owner user does not exist or if you require a new Oracle software owner user, then create it as
follows:
1. To create the oracle user, enter a command similar to the following:
# /usr/sbin/useradd -g oinstall -G dba oracle
In this command:
• The -g option specifies the primary group, which must be the Oracle Inventory group, for example
oinstall
• The -G option specifies the secondary groups, which must include the OSDBA group and, if required,
the OSOPER group; for example dba or dba,oper
2. Set the password of the oracle user:
# passwd oracle
If the oracle user exists, but its primary group is not oinstall or it is not a member of the appropriate OSDBA
or OSOPER groups, then enter a command similar to the following to modify it. Specify the primary group
using the -g option and any required secondary group using the -G option:
# /usr/sbin/usermod -g oinstall -G dba oracle
2.1.5. Verifying That the User nobody Exists
Verify that the unprivileged user nobody exists on the system. The nobody user must own the external jobs
(extjob) executable after the installation.
1. To determine if the user exists, enter the following command:
# id nobody
If this command displays information about the nobody user, then you do not have to create that user.
2. If the nobody user does not exist, then enter the following command to create it:
# /usr/sbin/useradd nobody
2.1.6. Creating Identical Users and Groups on Other Cluster Nodes
The Oracle software owner user and the Oracle Inventory, OSDBA, and OSOPER groups must exist and be
identical on all cluster nodes. To create these identical users and groups, you must identify the user ID and
group IDs assigned on the node where you created them, then create the user and groups with the same name
and ID on the other cluster nodes.
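For example, if the first node reported uid 502 for oracle, gid 501 for oinstall, and gid 502 for dba (IDs here are illustrative; use the values reported by id oracle on your first node), commands similar to the following on each of the other nodes would create matching accounts:
# /usr/sbin/groupadd -g 501 oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/useradd -u 502 -g oinstall -G dba oracle
# passwd oracle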
Note: If you are using users and groups defined in a directory service such as NIS, then they are already
identical on each cluster node.
2.2. SSH Setup
To determine whether SSH is running, enter the following command:
$ pgrep sshd
If SSH is running, then the response to this command is one or more process ID numbers. In the home
directory of the software owner that you want to use for the installation (crs, oracle), use the command ls -al to
ensure that the .ssh directory is owned and writable only by the user.
You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while
DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The
instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer
to your SSH distribution documentation to configure SSH1 compatibility or configure SSH2 with DSA.
2.2.1. Create RSA Keys On Each Node using the following steps:
1. Log in as the software owner (in this example, the oracle user).
2. To ensure that you are logged in as the Oracle user, and that the user ID matches the expected user ID you
have assigned to the Oracle user, enter the commands id and id oracle. Ensure that Oracle user group and user
and the terminal window process group and user IDs are identical. For example:
# id
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(oracle)
# id oracle
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(oracle)
3. If necessary, create the .ssh directory in the crs user's home directory, and set permissions on it to ensure
that only the crs user has read and write permissions:
# mkdir ~/.ssh
# chmod 700 ~/.ssh
4. Enter the following command to generate an RSA key pair:
# /usr/bin/ssh-keygen -t rsa
• Accept the default location for the key file (press Enter).
• Enter and confirm a pass phrase unique for this installation user.
This command writes the RSA public key to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa
file.
Never distribute the private keys to anyone not authorized to perform Oracle software installations.
5. Repeat steps 1 through 4 on each node that you intend to make a member of the cluster, using the RSA
keys.
2.2.2. Add All Keys to a Common authorized_keys File
1. Combine the contents of the id_rsa.pub files from each server. (You can do this all on one node.)
On the first node (stnsp001 in this case) run the commands below as the oracle user:
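The commands below are a representative sketch for the two-node cluster used in this paper (hostnames stnsp001 and stnsp002; adjust to your environment). They append each node's public key to a single authorized_keys file and then distribute it:
$ ssh stnsp001 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh stnsp002 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys stnsp002:~/.ssh/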
* Each time you connect from any node to a new hostname for the first time, you will see a message similar
to:
"The authenticity of host 'stnsp001 (10.137.8.215)' can't be established. RSA key fingerprint is ...
Are you sure you want to continue connecting (yes/no)?"
Type "yes" and press ENTER. You will then see the message:
"Warning: Permanently added 'stnsp001,10.137.8.215' (RSA) to the list of known hosts. "oracle@stnsp001's
password:"
In the .ssh directory, you should see the id_rsa.pub key that you have created, and the file authorized_keys.
Notice that this time SSH will prompt for the passphrase you used when creating the keys rather than the
oracle password. This is because the first node (stnsp001) now knows the public keys for the second node and
SSH is now using a different authentication protocol. Note, if you didn't enter a passphrase when creating the
keys with ssh-keygen, you will not be prompted for one here.
2.2.3. Enabling SSH User Equivalency on Cluster Member Nodes
After you have copied the authorized_keys file that contains all keys to each node in the cluster, complete the
following procedure, in the order listed. In this example, the Oracle Clusterware software owner is named crs:
1. On the system where you want to run OUI, log in as the crs user.
2. Use the following command syntax, where hostname1, hostname2, and so on, are the public hostnames
(alias and fully qualified domain name) of nodes in the cluster, to run SSH from the local node to each node,
including from the local node to itself, and from each node to each other node:
$ ssh hostname1 date
$ ssh hostname2 date
For example:
$ ssh stnsp001 date
The authenticity of host 'stnsp001 (xxx.xxx.8.215)' can't be established.
RSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stnsp001,xxx.xxx.8.215' (RSA) to the list of known hosts.
Enter passphrase for key '/home/crs/.ssh/id_rsa':
Mon May 19 11:08:13 PST 2008
$ ssh stnsp001.oracle.com date
The authenticity of host 'stnsp001.oracle.com (xxx.xxx.8.215)' can't be established.
RSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stnsp001.oracle.com,xxx.xxx.8.215' (RSA) to the list of known hosts.
Enter passphrase for key '/home/crs/.ssh/id_rsa':
Mon May 19 11:08:13 PST 2008
$ ssh stnsp002 date
Enter passphrase for key '/home/crs/.ssh/id_rsa':
Mon May 19 11:08:35 PST 2008
.
.
.
At the end of this process, the public hostname for each member node should be registered in the
known_hosts file for all other cluster member nodes.
If you are using a remote client to connect to the local node, and you see a message similar to "Warning: No
xauth data; using fake authentication data for X11 forwarding", then this means that your authorized keys file
is configured correctly, but your SSH configuration has X11 forwarding enabled. To correct this, proceed as
follows:
a. Using a text editor, edit or create the ~/.ssh/config file for the installation user.
b. Make sure that the ForwardX11 attribute is set to no. For example:
Host *
ForwardX11 no
4. (Optional) If you entered a passphrase above and desire password-less login (needed for the 10g RAC install),
you need to inform the ssh-agent (already running for the desktop) of the passphrase so that SSH clients are not
prompted for it. Once you notify ssh-agent, the passphrase is cached for the duration of the GUI login session.
Enter commands similar to the following:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
At the prompt, enter the pass phrase for each key that you generated.
These commands start the ssh-agent on the node, and load the RSA keys into memory so that you are not
prompted to use pass phrases when issuing SSH commands.
If you have configured SSH correctly, then you can now use the ssh or scp commands without being
prompted for a password or a pass phrase. For example:
$ ssh stnsp002 date
If any node prompts for a password or pass phrase, then verify that the ~/.ssh/authorized_keys file on that
node contains the correct public keys, and that you have created an Oracle software owner with identical
group membership and IDs.
2.3. Configuring the oracle's User Environment
• Set the default file mode creation mask (umask) to 022 in the shell startup file:
♦ Bourne, Bash, or Korn shell:
umask 022
♦ C shell:
umask 022
• /tmp directory: If you determined that the /tmp directory has less than 400 MB of free disk space,
then identify a file system with at least 400 MB of free space and set the TEMP and TMPDIR
environment variables to specify a temporary directory on this file system. Note: You cannot use a
shared file system as the location of the temporary file directory (typically /tmp) for Oracle RAC
installation.
• Install the cvuqdisk Package for Linux: If you are using Red Hat or SUSE Linux, then you must
download and install the operating system package cvuqdisk. Without cvuqdisk, CVU is unable to
discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you
run CVU. Use the cvuqdisk rpm for your hardware (i386, or for Itanium, ia64). To install the
cvuqdisk RPM, complete the following procedure:
♦ Locate the cvuqdisk RPM package, which is in the directory clusterware/rpm on the
installation media. If you have already installed Oracle Clusterware, then it is located in the
directory CRS_home/rpm.
♦ Copy the cvuqdisk package to each node on the cluster. You should ensure that each node is
running the same version of Linux.
♦ Log in as root.
♦ Using the following command, check to see if you have an existing version of the cvuqdisk
package:
# rpm -qi cvuqdisk
♦ If you have an existing version, then enter the following command to de-install the existing
version:
# rpm -e cvuqdisk
♦ Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk,
typically oinstall.
♦ Use the following command to install the cvuqdisk package (the package version shown is illustrative):
# rpm -iv cvuqdisk-1.0.1-1.rpm
Note: If you prefer, you can choose to disable CVU shared disk checks by adding the following line to the file
CRS_home/cv/admin/cvuconfig: CV_RAW_CHECK_ENABLED=FALSE
2.4. Network Requirements
2.4.1. Network Hardware Requirements
1. Each node must have at least two network adapters: one for the public network interface, and one for the
private network interface (the interconnect).
2. The public interface names associated with the network adapters for each network must be the same on all
nodes, and the private interface names associated with the network adapters should be the same on all nodes.
3. For the public network, each network adapter must support TCP/IP.
4. For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed
network adapters and switches that support TCP/IP (Gigabit Ethernet or better recommended).
5. For the private network, the endpoints of all designated interconnect interfaces must be completely
reachable on the network. There should be no node that is not connected to every private network. You can
test whether an interconnect interface is reachable using a ping command.
Note: UDP is the default interconnect protocol for Oracle RAC, and TCP is the interconnect protocol for
Oracle Clusterware. Token-Ring is not supported for the interconnect.
2.4.2. IP Address Requirements
• The public IP address, which should be recorded in hosts file on each node and, if available, DNS.
This IP Address should be bound to the public adapter before starting the install. It should be a static,
not DHCP, address
• The private IP address, which should be from a different subnet than the public IP address. This
address does not require registering in DNS but you should place an entry in the hosts file on each
node. This IP Address should be bound to the private adapter before starting the install. It should be a
static, not DHCP, address
• A VIP address, which should be from the same subnet as the public IP address and should be
recorded in DNS and the hosts file on each node. This IP Address should NOT be bound to the public
adapter before starting the install. Oracle Clusterware is responsible for binding this address. It should
be a static, not DHCP, address
If you do not have a DNS server, then make sure both the public and the VIP addresses are entered in the
relevant hosts file (normally /etc/hosts) on each node.
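A representative /etc/hosts layout for the two-node cluster used in this paper might look as follows (the public and private addresses match the examples elsewhere in this document; the VIP addresses are illustrative):
# Public
10.137.8.215 stnsp001
10.137.8.216 stnsp002
# Private (interconnect)
10.137.24.206 stnsp001-priv
10.137.24.207 stnsp002-priv
# Virtual (VIP)
10.137.8.217 stnsp001-vip
10.137.8.218 stnsp002-vip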
/sbin/ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:0E:0C:08:08:F2
inet addr:10.137.8.215 Bcast:10.137.15.255 Mask:255.255.248.0
inet6 addr: fe80::20e:cff:fe08:8f2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:111936568 errors:0 dropped:0 overruns:0 frame:0
TX packets:617958 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3027847550 (2.8 GiB) TX bytes:138428916 (132.0 MiB)
Base address:0x2880 Memory:fe7c0000-fe7e0000
2.4.3. Network Ping Tests
There are a series of 'ping' tests that should be completed, and then the network adapter binding order should
be checked. You should ensure that the public IP addresses resolve correctly and that the private addresses are
of the form 'nodename-priv' and resolve on both nodes via the hosts file.
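For example, from each node (hostnames as used elsewhere in this paper):
$ ping -c 3 stnsp001
$ ping -c 3 stnsp002
$ ping -c 3 stnsp001-priv
$ ping -c 3 stnsp002-priv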
• VIP ping test: pinging the VIP address at this point should fail. VIPs will be activated at the end of the
Oracle Clusterware install.
If any of the above tests fail you should fix name/address resolution by updating the DNS or local hosts files
on each node before continuing with the installation.
2.4.4. Network Adapter configuration
If your network adapters allow configuration, you should make sure that they are configured for Full Duplex
and at the fastest speed consistent among nodes. They should not be left to 'auto-negotiate'.
2.6. Configuring Kernel Parameters
1. To check the current values of the kernel parameters, enter the command shown against each parameter set:
Parameter(s): Command
semmsl, semmns, semopm, and semmni: # /sbin/sysctl -a | grep sem
shmall, shmmax, and shmmni: # /sbin/sysctl -a | grep shm
file-max: # /sbin/sysctl -a | grep file-max
ip_local_port_range: # /sbin/sysctl -a | grep ip_local_port_range
rmem_default, rmem_max, wmem_default, and wmem_max: # /sbin/sysctl -a | grep net.core
2. If the value of any kernel parameter is less than the recommended value, then complete the following
process: Using any text editor, create or edit the /etc/sysctl.conf file, and add or edit lines similar to the
following:
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 2097152
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Because the values are specified in the /etc/sysctl.conf file, they persist when you restart the system. Note:
include lines only for the kernel parameter values that you want to change. For the semaphore parameters
(kernel.sem), you must specify all four values. However, if any of the current system parameter values are
greater than the recommended values, then keep using the larger values.
3. On Red Hat systems, to have these changes take effect immediately so that you do not have to restart the
system, enter the following command:
# /sbin/sysctl -p
On SUSE systems only, enter the following command to cause the system to read the /etc/sysctl.conf file
when it restarts: # /sbin/chkconfig boot.sysctl on
4. On SUSE Linux Enterprise Server 9.0 only, set the kernel parameter disable_cap_mlock as follows:
disable_cap_mlock = 1
5. After updating the values of kernel parameters in the /etc/sysctl.conf file, either restart the computer, or run
the command sysctl -p to make the changes in the /etc/sysctl.conf file available in the active kernel memory.
2.7. Setting Shell Limits for the oracle User
2. Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
session required pam_limits.so
3. Depending on the oracle user's default shell, make the following changes to the default shell startup file:
• For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file (or the file
/etc/profile.local on SUSE systems):
• For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file (or the file
/etc/csh.login.local on SUSE systems):
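The specific entries referenced by this procedure are elided above; the values below are those commonly recommended for 10gR2 (verify against your platform documentation). In /etc/security/limits.conf:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
And in /etc/profile (Bourne, Bash, or Korn shell):
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi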
2.8. Configuration of the Hangcheck-timer Module
1. The hangcheck_tick parameter defines how often, in seconds, the hangcheck-timer checks the
node for hangs. The default value is 60 seconds. Oracle recommends setting it to 1
(hangcheck_tick=1).
2. The hangcheck_margin parameter defines how long the timer waits, in seconds, for a response
from the kernel. The default value is 180 seconds. Oracle recommends setting it to 10
(hangcheck_margin=10).
3. The hangcheck_reboot parameter: if its value is equal to or greater than 1, then the
hangcheck-timer module restarts the system. If the hangcheck_reboot parameter is set to
zero, then the hangcheck-timer module will not restart the node. It should always be set to 1.
If the kernel fails to respond within the sum of the hangcheck_tick and hangcheck_margin parameter values,
then the hangcheck-timer module restarts the system.
2.8.1. Hangcheck-timer Module verification procedure
1. Log in as root, and enter the following command to check the kernel version:
# uname -a
2. Enter the following command on each node to determine which kernel modules are loaded:
# /sbin/lsmod
3. If the hangcheck-timer module is not loaded, on Kernel 2.6 enter a command similar to the following to
start the module located in the directories of the current kernel version:
# insmod /lib/modules/kernel_version/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=1
hangcheck_margin=10 hangcheck_reboot=1
In the preceding command example, the variable kernel_version is the kernel version running on your system,
as obtained from uname -a.
4. To confirm that the hangcheck module is loaded, enter the following command:
# /sbin/lsmod | grep hangcheck
The output should be similar to the following:
hangcheck_timer 3289 0
5. To ensure that the module is loaded every time the system restarts, verify that the local system startup file
contains the command shown in the previous step, or add it if necessary:
• On Red Hat Enterprise Linux systems, add the command to the /etc/rc.d/rc.local file.
• On SUSE systems, add the command to the /etc/init.d/boot.local file.
2.9. Platform Specific Setup
Configure SELinux on OEL5 and RHEL5:
As per Metalink Note 454196.1, on Oracle Enterprise Linux 5 and Red Hat Enterprise Linux 5 it is required to
either disable SELinux or set it to permissive mode, using one of the methods below:
• Editing /etc/selinux/config: set SELINUX=disabled (to completely disable) or SELINUX=permissive
(to set to permissive mode).
• Editing the kernel boot line: append "selinux=0" (to completely disable), or "enforcing=0"
(to set to permissive mode), to the kernel boot options.
• Do one of the above and then reboot the server for change to take effect, or if a reboot is not possible,
issue as root:
# setenforce 0
Ensure latest glibc is installed:
As per Metalink Note:731599.1, to avoid false node reboots, ensure your glibc rpm is updated as follows:
# up2date glibc
Disable deletion of critical Clusterware files:
Oracle Clusterware places important socket files in the /var/tmp/.oracle directory, which can be removed by
Linux crontab cleanup jobs. As per Metalink Note 391790.1, ensure that such deletion does not occur: the
supplied built-in tmpwatch cron job (typically /etc/cron.daily/tmpwatch) should be modified so that it does not
touch /var/tmp/.oracle (for example, using tmpwatch's -x /var/tmp/.oracle exclusion option).
2.10. Required Software Directories
2.10.1. Oracle Base Directory
The Oracle base directory acts as a top-level directory for Oracle software installations. You can use the same
Oracle base directory for more than one installation or you can create separate Oracle base directories for
different installations. If different operating system users install Oracle software on the same system, then
each user must create a separate Oracle base directory. Regardless of whether you create an Oracle base
directory or decide to use an existing one, you must set the ORACLE_BASE environment variable to specify
the full path to the Oracle base directory. Note: The Oracle base directory can be on a local file system or on
an NFS file system on a certified NAS device. Do not create the Oracle base directory on an OCFS version 1
file system. If you are not using an NFS file system, then create identical Oracle base directories on the other
nodes. The Oracle base directory must have at least 1.5 GB of free disk space on all the nodes in the cluster.
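For example, to create an Oracle base directory of /u01/app/oracle (the path is illustrative) and assign it to the oracle user:
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle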
2.10.2. Oracle Inventory Directory
The Oracle Inventory directory (oraInventory) stores an inventory of all software installed on the system. It is
required by, and shared by, all Oracle software installations on a single system. The first time you install
Oracle software on a system, Oracle Universal Installer prompts you to specify the path to this directory. If
you are installing the software on a local file system, then Oracle recommends that you choose the following
path: oracle_base/oraInventory If the Oracle base directory is located either on a cluster file system, or on a
shared NFS file system on a NAS device, then you must place the Oracle Central Inventory directory on a
local file system, privately mounted on each node, so that each node has a separate copy of the central
inventory. If you specify a shared location for the Oracle Central Inventory, then each node attempts to write
to the same central inventory. This is not supported. Oracle Universal Installer creates the directory that you
specify, and sets the correct owner, group, and permissions for it. You do not need to create it. Note: All
Oracle software installations rely on the Oracle base directory. Make sure that you back it up regularly. Do not
delete the Oracle base directory unless you have completely removed all Oracle software from the system.
2.10.3. Oracle Clusterware Home Directory
The Oracle Clusterware home directory is the directory where you choose to install the software for Oracle
Clusterware. You must install Oracle Clusterware in a separate home directory. When you run Oracle
Universal Installer, it prompts you to specify the path to this directory, as well as a name that identifies it.
Oracle Universal Installer (OUI) creates the Oracle Clusterware home directory for you. Ensure before you
start the installation that you provide sufficient disk space on a file system for the Oracle Clusterware
directory (at least 1.4GB of free space), and the parent directory of the Oracle Clusterware directory space is
writable by the oracle user.
Note: Because you must change the permissions of all of the parent directories of the Oracle Clusterware
home directory after installing the software to grant write access only to the root user, the Oracle Clusterware
home directory must not be a subdirectory of the Oracle base directory.
2.10.4. Oracle Home Directory
The Oracle home directory is the directory where you choose to install the software for a particular Oracle
product. You must install different Oracle products, or different releases of the same Oracle product, in
separate Oracle home directories. When you run Oracle Universal Installer, it prompts you to specify the path
to this directory, as well as a name that identifies it. The directory that you specify must be a subdirectory of
the Oracle base directory. Oracle Universal Installer creates the directory path that you specify under the
Oracle base directory. It also sets the correct owner, group, and permissions on it. You do not need to create
this directory. Caution: During the installation, you must not specify an existing directory that has predefined
permissions applied to it as the Oracle home directory. If you do, then you may experience installation failure
due to file and group ownership permission errors.
2.10.5. Identifying Existing Oracle Directories
Identifying an existing Oracle Inventory directory: enter the following command on all nodes in the cluster to
view the contents of the oraInst.loc file. On x86 and Itanium systems:
# more /etc/oraInst.loc
On platforms where the file is kept under /var/opt/oracle:
# more /var/opt/oracle/oraInst.loc
If the oraInst.loc file exists, then the output from this command is similar to the following:
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
The inventory_loc parameter identifies the Oracle Inventory directory (oraInventory) on that system. The
parent directory of the oraInventory directory is typically an Oracle base directory. In the previous example,
/u01/app/oracle is an Oracle base directory.
Identifying existing Oracle home directories: enter the following command on all nodes in the cluster to view
the contents of the oratab file:
# more /etc/oratab
If the oratab file exists, then it contains lines similar to the following:
*:/u03/app/oracle/product/10.2.0/db_1:N
*:/opt/orauser/infra_904:N
The directory paths specified on each line identify Oracle home directories. Directory paths that end with the
user name of the Oracle software owner that you want to use are valid choices for an Oracle base directory.
Before deciding to use an existing Oracle base directory for this installation, make sure that it satisfies the
following conditions:
2.11. CVU Stage Check
After the hardware and OS have been configured, it is recommended to run CVU to verify that the nodes are
configured correctly, for example (a representative invocation, run from the staged Clusterware media):
$ ./runcluvfy.sh stage -post hwos -n stnsp001,stnsp002 -verbose
Interfaces found on subnet "10.137.8.0" that are likely candidates for a private interconnect:
stnsp002 eth0:10.137.8.216
stnsp001 eth0:10.137.8.215
Interfaces found on subnet "10.137.24.0" that are likely candidates for a private interconnect:
stnsp002 eth1:10.137.24.207
stnsp001 eth1:10.137.24.206
Interfaces found on subnet "10.137.20.0" that are likely candidates for a private interconnect:
stnsp002 eth2:10.137.20.172
stnsp001 eth2:10.137.20.171
WARNING:
Could not find a suitable set of interfaces for VIPs.
WARNING:
Package cvuqdisk not installed.
stnsp001
3. Prepare the shared storage for Oracle RAC
This section describes how to prepare the shared storage for Oracle RAC.
For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and
Oracle Real Application Clusters (Oracle RAC) database files. You do not have to use the same storage
option for each file type.
• Block or Raw Devices: Oracle Clusterware files can be placed on either block or raw devices based
on shared disk partitions. Oracle recommends using block devices for easier usage. Refer to
Metalink Note 401132.1 if you want to use block devices.
• A supported shared file system: Supported file systems include the following:
♦ A supported cluster file system: OCFS (Linux Kernel 2.4), OCFS2 (Linux Kernel 2.6) or
GPFS (IBM POWER).
♦ Network File System (NFS): A file-level protocol that enables access and sharing of files
Note
1. If you do not have a storage option that provides external file redundancy, then you must configure at
least three voting disk areas to provide voting disk redundancy.
2. If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize
to at least 16384. Oracle recommends that you use the value 32768 (see the example mount entry after this note).
3. You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before
any ASM instance starts.
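For example, an /etc/fstab entry using commonly recommended NFS mount options might look as follows (the NAS hostname, export path, and mount point are illustrative):
nas1:/vol/oradata /u02/oradata nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768 0 0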
3.1. Create Partitions
As root, now configure storage for the cluster registry, voting disk and database files. You are presented with
a number of disks from the storage array. The output of the fdisk -s /dev/sd[b-e] command may look as follows:
/dev/sdb: 116924416
/dev/sdc: 116924416
/dev/sdd: 116924416
/dev/sde: 116924416
/dev/sdb 116G
/dev/sdc 116G
/dev/sdd 116G
/dev/sde 116G
3.1.1. Example of Configuring Block Device Storage for Oracle Clusterware
The procedure to create partitions for Oracle Clusterware files on block devices is as follows:
1. Log in as root.
2. Enter the fdisk command to format a specific storage disk (for example, /sbin/fdisk /dev/sdb).
3. Create a new partition, and make the partition 280 MB in size for both OCR and voting disk partitions.
4. Use the command syntax /sbin/partprobe diskpath on each node in the cluster to update the kernel
partition table for the shared storage device on each node.
The following is an example of how to use fdisk to create one partition on a shared storage block disk device
for an OCR file:
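A representative session is sketched below, assuming /dev/sdb and a 280 MB partition (prompts, cylinder counts, and device names will vary with your storage and fdisk version):
# /sbin/fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-14241, default 1): <Enter>
Last cylinder or +size or +sizeM or +sizeK: +280M
Command (m for help): w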
Log in as the root user on the remote nodes and execute the following:
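# /sbin/partprobe /dev/sdb
(The device name matches the disk partitioned above; adjust as required.)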
Note: Oracle recommends that you create partitions for Oracle Clusterware files on physically separate disks.
The user account with which you perform the installation (oracle or crs) must have write permissions to create
the files in the path that you specify.
3.1.2. Example of Creating a Udev Permissions File for Oracle Clusterware
The procedure to create a permissions file to grant oinstall group members write privileges to block devices is
as follows:
1. Log in as root.
2. Change to the /etc/udev/permissions.d directory:
# cd /etc/udev/permissions.d
3. Start a text editor, such as vi, and enter the partition information where you want to place the OCR
and voting disk files, using the syntax device[partitions]:root:oinstall:0640. Note that Oracle
recommends that you place the OCR and the voting disk files on separate physical disks. For
example, to grant oinstall members access to SCSI disks, placing OCR files on sdb1 and sdc1 and
voting disk files on sdb5, sdc5 and sdd5, enter:
# OCR disks
sdb1:root:oinstall:0640
sdc1:root:oinstall:0640
# Voting disks
sdb5:crs:oinstall:0640
sdc5:crs:oinstall:0640
sdd5:crs:oinstall:0640
4. Save the file:
• On Asianux 2, Enterprise Linux 4, and Red Hat Enterprise Linux 4 systems, save the file as
49-oracle.permissions.
• On Asianux 3, Enterprise Linux 5, Red Hat Enterprise Linux 5, and SUSE Enterprise Server 10
systems, save the file as 51-oracle.permissions.
5. Using the following command, assign the permissions in the udev file to the devices:
# /sbin/udevstart
Use the procedure above to create any additional partitions required for the OCR, voting, and ASM disks.
Refer to the OS documentation for additional information on using the fdisk command.
3.2. Installing the cvuqdisk Package for Linux
1. Locate the cvuqdisk RPM package, which is in the directory clusterware/rpm on the installation media. If
you have already installed Oracle Clusterware, then it is located in the directory CRS_home/rpm.
2. Copy the cvuqdisk package to each node on the cluster. You should ensure that each node is running the
same version of Linux.
3. Log in as root.
4. Using the following command, check to see if you have an existing version of the cvuqdisk package:
# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version:
rpm -e cvuqdisk
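To complete the installation, set the owning group and install the package; a representative sequence (the package version shown is illustrative) is:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
# rpm -iv cvuqdisk-1.0.1-1.rpm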
3.3. CVU
To check for all shared storage available across all nodes on the cluster, use the following command (the
runcluvfy.sh invocations shown are representative):
$ ./runcluvfy.sh comp ssa -n all
If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster,
then use the following command syntax:
$ ./runcluvfy.sh comp ssa -n node_list -s storageID_list
For example:
$ ./runcluvfy.sh comp ssa -n stnsp001,stnsp002 -s /dev/sdb
4. Oracle Clusterware Installation and Configuration
Next we will install the Oracle Clusterware layer. Oracle Clusterware is an essential component of the Oracle
RAC database infrastructure. Oracle Clusterware does not require any other clustering software. You must not
install any other Cluster Software.
Note: When installing 10gR2 RAC on Oracle Enterprise Linux 5, RHEL5, or SLES10, you must first install
the base release, which is 10.2.0.1. As these OS versions are newer, you should use the following command to
invoke the installer:
$ ./runInstaller -ignoreSysPrereqs
• Notes
♦ The OUI will ask for the inventory directory. The default is ORACLE_BASE/oraInventory. It
will also ask for the Operating System group that will have write permission to the inventory.
• Actions
♦ Specify a location for the Inventory directory and specify the Operating System group
♦ Click Next
• Notes
♦ The OUI will name the Oracle Clusterware Home 'OraCrs10g_home'. If you change this, you
should make sure that the name you use is unique
• Actions
• Notes
♦ The installer will validate the state of the cluster before continuing. If there are issues you
should rectify them before continuing
• Actions
♦ Click Next
• Notes
♦ Each cluster requires a name; this should be unique within your organisation. The default is
crs.
♦ This is where you specify details of all the nodes in the cluster. The installer will default
names for the node it is running on. You must add other nodes manually
♦ Oracle defaults the names to 'nodename', 'nodename-priv', 'nodename-vip'
• Actions
◊ Public Node Name : must resolve via hosts and/or DNS to the public IP address and
must be live
◊ Private Node Name : must resolve via hosts to the interconnect IP address and must
be live
◊ Virtual Host Name : must resolve via hosts and/or DNS to a new IP address and must
not be live
◊ If these are not correct, select the node entry and click Edit to modify.
♦ Click Add to enter a new node
• Notes
♦ Here you specify the details of the node you wish to add to the cluster nodes list
• Actions
◊ Public Node Name : must resolve via hosts and/or DNS to the public IP address and
must be live
◊ Private Node Name : must resolve via hosts to the interconnect IP address and must
be live
◊ Virtual Host Name : must resolve via hosts and/or DNS to a new IP address and must
not be live
♦ Click OK to return to the node list for the cluster
• Notes
♦ The installer lists all the Network adapters. You should have one Adapter correctly identified
as type 'Public' and at least one adapter correctly identified as type 'Private'. The installer will
try and guess the use of an adapter based on the IP address bound. If it guesses incorrectly
you must change the usage. Here it has guessed that all adapters are Private, which is
incorrect.
• Actions
• Notes
• Notes
♦ Next we specify the devices to be used for the Oracle Clusterware vote disks
• Actions
♦ Click Install
• Notes
♦ none required
• Notes
♦ none required
• Notes
♦ Click Exit
To see the resources configured by Oracle Clusterware, run the ./crs_stat -t command from the Oracle
Clusterware home bin directory.
The specific checks included in the tasks of this stage are node reachability, user equivalence, cluster manager
integrity (CSS daemon up), OCR integrity (uniqueness, correct version, no non-cluster configs), CRS integrity
(CSS, CRS, EVM daemons up), and node applications existence (VIP, ONS, GSD configured).
Note: "-n all" picks up nodes from lsnodes if you are using vendor clusterware and from olsnodes if you are
The first step in the application of this patchset is to shut down the Oracle Clusterware on all nodes in
the RAC configuration. Log in as root and issue the following command on all the nodes (run from the
Clusterware home bin directory):
# crsctl stop crs
As Oracle user, enter the following commands to start Oracle Universal Installer, where patchset_directory is
the directory where you unpacked the patch set software:
$ cd patchset_directory/Disk1
$ ./runInstaller
• Notes
♦ Click Next
♦ The installer will validate the state of the cluster before continuing.
• Action
• Notes
♦ Click Install
♦ none required
• Notes
♦ Log in as the root user and enter the following command to shut down the Oracle
Clusterware:
# crsctl stop crs
♦ Run the root102.sh script (located in CRS_home/install). It will automatically start the
Oracle Clusterware on the patched node:
# CRS_home/install/root102.sh
6. Install Oracle ASM Software only Home
$ ./runInstaller
• Notes
♦ Here we specify the name and location of ASM home. Modify as required (usually the Home
name and Home Path should include the word ASM)
• Action
• Notes
♦ The installer has detected the presence of Oracle Clusterware and uses this to populate this
dialog box. To build a cluster which includes all nodes you must ensure that there are
check-boxes next to the node names
• Action
♦ The installer will validate the state of the cluster before continuing. If there are issues you
should rectify them before continuing
• Actions
♦ Click Next
• Notes
♦ We are going to install a Software only home and then subsequently configure the software
• Actions
• Notes
♦ Click Install
• Notes
♦ Here the installer copies the software to all nodes in the cluster
• Actions
♦ none required
• Notes
♦ The installer pauses, some scripts need to be run as root on both nodes of the cluster
• Action
♦ here we run the root scripts - they should only take a few seconds to run on each node
• Action
♦ run the scripts indicated in the previous screen on both nodes (one after the other)
♦ then return to the installer and Click OK
• Notes
♦ After the software install completes you will see this End of Installation dialog
• Actions
7. Oracle ASM Software Home Patching
Enter the following commands to start Oracle Universal Installer, where patchset_directory is the directory
where you unpacked the patch set software:
$ cd patchset_directory/Disk1
$ ./runInstaller
• Notes
♦ Click Next.
• Notes
♦ On this screen, you will register with Oracle Configuration Manager (OCM)
• Action
♦ If you want to register with OCM, enter details regarding the CSI Number, OracleMetaLink
account user name and Country code (optional - the registration and configuration can also
be done manually after patchset installation)
♦ Click Next
• Notes
♦ Click Install
♦ The installer pauses; root.sh needs to be run as root on both nodes of the cluster
• Action
♦ open a shell window on each node and run root.sh (one after the other)
♦ then return to the installer and Click OK
• Action
8. Oracle RAC Database Listener Creation
8.1. Create Node specific network listeners
• Ensure the ORACLE_HOME environment variable is set to the recently installed ASM home
location
• Change to the ASM home bin directory
• Run ./netca
$cd /scratch/oracle11/oracle/product/10.2.0/asm/bin
$./netca
• Notes
♦ Netca detects that the Oracle Clusterware layer is running and offers Cluster or Single Node
configuration
• Actions
♦ Netca uses Oracle Clusterware to determine all the nodes in the cluster
• Actions
• Notes
♦ The default name is LISTENER. Do not change this. The listeners will eventually be called
LISTENER_nodename1 & LISTENER_nodename2. This is important for RAC
• Actions
♦ Click Next
• Notes
♦ Oracle Net supports various network protocols, although TCP is the most common.
• Actions
♦ Ensure the "Use the standard port number of 1521" radio button is selected
♦ Click Next
• Notes
♦ After configuring the node listeners you get the opportunity to configure more network
components
• Actions
♦ Click Next
• Notes
You can see the listener resources inside Oracle Clusterware by running crs_stat.
$ ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....13.lsnr application ONLINE ONLINE stnsp013
ora....013.gsd application ONLINE ONLINE stnsp013
ora....013.ons application ONLINE ONLINE stnsp013
ora....013.vip application ONLINE ONLINE stnsp013
ora....14.lsnr application ONLINE ONLINE stnsp014
ora....014.gsd application ONLINE ONLINE stnsp014
A new managed resource has been added to the Oracle Clusterware OCR for each listener. You have now
completed the network listener configuration on the RAC cluster nodes.
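As an additional check, the node applications (including the listeners) can be queried with srvctl from the same Clusterware home bin directory (a sketch, using one of the node names from the output above):
$ ./srvctl status nodeapps -n stnsp013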
9.1. Create ASM Instance and add the +DATA and +FLASH diskgroups
Ensure the ORACLE_HOME environment variable is set to the ASM home directory and run dbca from the
ASM home bin directory:
$ cd /scratch/oracle11/oracle/product/10.2.0/asm/bin
$ ./dbca
• Notes
♦ dbca detects that the Oracle Clusterware layer is running and offers to create either a cluster or a
single-instance database
• Actions
• Notes
♦ You need to make sure you create ASM instances on all the cluster nodes
• Actions
• Notes
♦ Here we specify the password for the ASM Oracle SYS user
• Actions
• Notes
♦ Click OK
• Notes
♦ no action
• Notes
♦ ASM requires disks to be grouped together into diskgroups. This section will be used to create 2
disk groups, +DATA and +FLASH
• Actions
• Notes
♦ Here we specify a filter to allow us to see the disks on the shared array
• Action
• Notes
♦ none required
• Notes
♦ Now we will assign disks to specific disk groups and create the DATA diskgroup
• Actions
• Notes
♦ none
• Notes
♦ Here we can see the DATA diskgroup has been created and is mounted on 2/2 instances. We
now need to create the FLASH diskgroup
• Actions
• Notes
♦ We need to allow the installer to see the disks reserved for the FLASH disk group
• Action
• Notes
♦ We need to modify the disk discovery string to add the new path (see the example below)
• Action
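For illustration only, a discovery string might look like the following; the path is hypothetical, so use the path that matches your own shared disks:
/dev/raw/raw*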
• Notes
♦ Now we will assign disks to specific disk groups and create the FLASH diskgroup. This
diskgroup is created with Normal redundancy
• Actions
♦ A progress message will be displayed.
• Notes
♦ Here we can see the DATA and FLASH diskgroups have been created and are mounted on
2/2 instances. This completes the ASM configuration.
• Actions
♦ Click Finish
• Notes
♦ Click No
Creation of the ASM instances and addition of the +DATA and +FLASH diskgroups are now complete.
Congratulations, you have installed ASM and the network listeners and created two ASM diskgroups.
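As a quick check, the diskgroups can be queried from one of the ASM instances (a sketch; the SID +ASM1 assumes the default ASM instance naming):
$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;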
$ ./runInstaller
• Notes
♦ Here we specify the name and location of the RAC database home. Modify as required (the
Home name and Home Path should be distinct from the ASM home installed earlier)
• Action
♦ The installer has detected the presence of Oracle Clusterware and uses this to populate this
dialog box. To build a cluster that includes all nodes, ensure the check-boxes next to the node
names are selected
• Action
• Notes
♦ The installer will validate the state of the cluster before continuing. If there are issues you
should rectify them before continuing
• Actions
♦ Click Next
♦ The installer has detected another instance (ASM) and asks if you want to upgrade
• Actions
♦ Click No
• Notes
♦ We are going to install a software-only home and then configure the software afterwards
• Actions
♦ Click Install
• Notes
♦ Here the installer copies the software to all nodes in the cluster
• Actions
♦ none required
♦ The installer pauses; some scripts need to be run as root on both nodes of the cluster
• Action
• Notes
♦ Here we run the root scripts; they should only take a few seconds to run on each node
• Action
♦ Run the scripts indicated in the previous screen on both nodes (one after the other)
♦ Then return to the installer and click OK
♦ After the software install completes, you will see the End of Installation dialog
• Actions
Ensure that all databases using the to-be-patched Oracle home are fully shut down on all nodes, then proceed.
Enter the following commands to start Oracle Universal Installer, where patchset_directory is the directory
where you unpacked the patch set software:
$ cd patchset_directory/Disk1
$ ./runInstaller
• Notes
♦ Click Next.
• Notes
♦ On this screen, you can register with Oracle Configuration Manager (OCM)
• Action
♦ If you want to register with OCM, enter the CSI Number, OracleMetaLink account user name,
and Country code (optional; registration and configuration can also be done manually after
patchset installation)
♦ Click Next
• Notes
♦ Click Install
♦ The installer pauses; root.sh needs to be run as root on all nodes of the cluster
• Action
♦ Open a shell window on each node and run root.sh (one after the other)
♦ Then return to the installer and click OK
Ensure the ORACLE_HOME environment variable is set to the new RAC home, then change to the RAC home
bin directory and launch dbca:
$ cd $ORACLE_HOME/bin
$ ./dbca
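Here ORACLE_HOME would be set beforehand along these lines (a sketch; the db_1 path is hypothetical and should match your actual RAC home):
$ export ORACLE_HOME=/scratch/oracle11/oracle/product/10.2.0/db_1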
• Notes
♦ dbca detects that the Oracle Clusterware layer is running and offers to create either a cluster or a
single-instance database
• Actions
• Notes
♦ You need to make sure you create RAC database instances on all the cluster nodes
• Actions
• Notes
♦ Now you specify the prefix for the SIDs of the RAC database instances; the recommendation
is to keep it simple
• Actions
♦ Enter your database name in the Global Database Name field; the SID prefix should autofill
♦ Click Next
♦ Here you configure Oracle Enterprise Manager. If you have no Grid Control server, the best
method is to use Database Control as detailed here.
• Actions
• Notes
♦ Here we specify the password for the Oracle Database users. In this example we are setting
the same password for all users. You should set a password scheme that meets your
requirements.
• Actions
♦ Here you specify where you would like your database datafiles stored. We are going to use
the ASM diskgroups we created earlier.
• Actions
• Notes
♦ dbca displays the diskgroups we previously created. We are going to use the DATA
diskgroup
• Actions
♦ We will use Oracle Managed Files. All the database files will be created on the DATA
diskgroup.
• Actions
♦ Click Next
• Notes
♦ Here we specify recovery configuration information. We are going to use a flash recovery
area.
• Actions
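For reference, the storage choices above (Oracle Managed Files on +DATA and a flash recovery area on +FLASH) correspond to initialization parameters along these lines; this is a sketch and the size value is illustrative only:
db_create_file_dest='+DATA'
db_recovery_file_dest='+FLASH'
db_recovery_file_dest_size=10G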
• Notes
♦ Click Next
• Notes
♦ Click Next
• Notes
♦ A summary screen
• Actions
♦ Click OK
♦ none
• Notes
♦ After the database is created the summary screen is displayed. Note the URL for the Database
Control
• Actions
♦ none
You can see that the cluster has registered and started the database instances on each node:
# cd <CRS_home>/bin
# ./crs_stat -t
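You can also check the instance status with srvctl (a sketch; substitute the database name you chose in dbca):
$ ./srvctl status database -d <db_name>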