
Table of Contents

1. HARDWARE ARCHITECTURE
   1.1 Overall configuration
   1.2 DB servers: IBM pSeries Servers (p595)
   1.3 Operating System
   1.4 Network Infrastructure
   1.5 AIX virtual I/O Disks
2. BASIC Database information for IBPS
   2.1 DC and DR database extended RAC
3. Way to Implement Oracle 10g RAC on AIX 5L
4. CHECK LIST TO USE AND FOLLOW
5. PREPARING THE SYSTEM
   5.1 Hardware Requirement
   5.2 Software Requirements
   5.3 Tuning AIX System Environment
   5.4 Users and Groups
   5.5 Network Configuration
   5.6 Local Disk for Oracle code
   5.7 Node Time Requirement
   5.8 User Equivalence Setup
   5.9 Running the rootpre.sh script
   5.10 Update .profile (User Environment)
6. INSTALLING ORACLE CLUSTERWARE
   6.1 Verify Oracle Clusterware Requirements with CVU
   6.2 Preparing to install Oracle CRS with OUI
   6.3 Confirming Oracle Clusterware Function
   6.4 Oracle Clusterware Postinstallation Procedures
7. TASKS for After Installation
   7.1 Tuning parameter file
8. Installation screen shot
   8.1 Two node extended RAC (DC)
       CRS Installation screen shot
       Oracle Software installation screen shot
       Configure ASM Instance
   8.2 Two node extended RAC (DR)
       CRS Installation screen shot
       Oracle Software installation screen shot

1. HARDWARE ARCHITECTURE
For our infrastructure, we used a cluster composed of five IBM pSeries (p595) servers. This chapter will be revised after the hardware implementation is completed.

1.1 Overall configuration

1.2 DB servers: IBM pSeries Servers (p595)


These are the IBM pSeries servers we used for the installation:

Server Configurations

Storage Configurations

1.3 Operating System


The operating system must be installed identically on all nodes of the cluster: the same version, the same maintenance level, and the same APAR and fileset levels. Supported OS version: AIX 5L 5.3, ML 02 or later.
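A quick way to confirm that all nodes are at the same level is to capture the OS and fileset levels on each node and compare them; a minimal sketch (host names from section 5.5, run as root on each node):

# oslevel -s > /tmp/oslevel.`hostname`
# lslpp -Lc > /tmp/filesets.`hostname`

Copy the files to one node, then compare:

# diff /tmp/oslevel.dcdb1 /tmp/oslevel.dcdb2
# diff /tmp/filesets.dcdb1 /tmp/filesets.dcdb2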

1.4 Network Infrastructure


You must have three network addresses for each node:
- A public IP address.
- A virtual IP address, used by applications for failover in the event of node failure.
- A private IP address, used by Oracle Clusterware and Oracle RAC for inter-node communication.

The virtual IP address has the following requirements:
- The IP address and host name are currently unused (it can be registered in a DNS, but it should not be reachable by the ping command).
- The virtual IP address is on the same subnet as your public interface.

The private address has the following requirements:
- It should be on a subnet reserved for private networks, such as 10.0.0.0 or 192.168.0.0.
- It should use dedicated switches or a physically separate, private network, reachable only by the cluster member nodes, preferably using high-speed NICs.
- It must use the same private interfaces for both the Oracle Clusterware and RAC private IP addresses.

Ping all IP addresses. The public and private IP addresses should respond to ping commands; the VIP address should not respond.
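A minimal sketch to verify this ping behavior from one node (DC host names from the /etc/hosts setup in section 2; adjust for the DR site):

for h in dcdb1 dcdb2 dcdb1-priv dcdb2-priv
do
    ping -c 2 $h > /dev/null && echo "$h answers (expected)" || echo "$h does NOT answer - check the network"
done
for h in dcdb1-vip dcdb2-vip
do
    ping -c 2 $h > /dev/null && echo "$h answers - the VIP must NOT answer before installation" || echo "$h is silent (expected)"
done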

1.5 AIX virtual I/O Disks


You can use virtual I/O disks for:
- Oracle Clusterware ($ORACLE_CRS_HOME)
- Oracle RAC software ($ORACLE_HOME)

But they are not to be used for:
- OCR and Voting disks
- Oracle database files

2. BASIC DATABASE INFORMATION FOR IBPS


2.1 DC and DR database extended RAC
1) DB SERVER
CATEGORY        DC            DR            REMARK
DB_NAME         DCDB          DRDB          DC: 2-node, DR: 2-node
OS ACCOUNT      oracle:dba    oracle:dba
PORT            1521          1521
CHARACTER SET   UTF8          UTF8
DB_BLOCK_SIZE   8K            8K
ORACLE_HOME     /u01/app/db   /u01/app/db
MOUNT POINT     /u01          /u01
FILESYSTEM      $ORACLE_CRS_HOME, $ORACLE_HOME (both sites)

DATABASE SERVER IDENTIFICATION (DC 2-NODE RAC, DR 2-NODE RAC)

1) SERVER NAME

CATEGORY      DC             DR             REMARK
SERVER NAME   dcdb1, dcdb2   drdb1, drdb2   DC (cluster name), DR (cluster name); dbx (node order)

2) /etc/hosts

CATEGORY: DC

# Public network
192.168.10.10 dcdb1 # node name db1
192.168.10.12 dcdb2 # node name db2
# Oracle RAC interconnect network
172.16.100.14 dcdb1-priv # db1 private IP
172.16.100.16 dcdb2-priv # db2 private IP
# Virtual IP for Oracle
192.168.10.14 dcdb1-vip # db1 virtual IP
192.168.10.16 dcdb2-vip # db2 virtual IP

CATEGORY: DR

# Public network
10.192.10.10 drdb1 # drdb1
10.192.10.12 drdb2 # drdb2
# Oracle RAC interconnect network
172.16.100.14 drdb1-priv # bak db1 private IP
172.16.100.16 drdb2-priv # bak db2 private IP
# Virtual IP for Oracle
10.192.10.14 drdb1-vip # bak db1 virtual IP
10.192.10.16 drdb2-vip # bak db2 virtual IP

3. WAY TO IMPLEMENT ORACLE 10G RAC ON AIX 5L


We will implement Oracle 10g RAC on AIX 5L at SBV Bank as follows:
- ASM is used for Oracle database storage.
- NO NEED for HACMP.
- NO NEED for GPFS.
- Oracle database files (datafiles, redo log files, archive logs) are stored on disks managed by Oracle ASM.
- CRS files (OCR and Voting) are placed on raw disks.

4. CHECK LIST TO USE AND FOLLOW


This is the list of operations you should perform before moving to the Oracle installation steps. Record done (Yes/No) for each operation on each node (Node 1, Node 2):

- Check the Hardware Requirements
- Check the Network Requirements
- Check the Software Requirements
- Tune the AIX System Environment
- Create Required UNIX Groups and Users
- Configure Kernel Parameters and Shell Limits
- Identify Required Software Directories
- Identify or Create an Oracle Base Directory
- Create the CRS Home Directory
- Choose Storage Options for Oracle CRS, Database, and Recovery Files
- Create LUNs for Oracle CRS, Database, and Recovery Files
- Configure Disks for ASM
- Synchronize the System Time on Cluster Nodes
- Stop Existing Oracle Processes
- Configure the Oracle User Environment
- Set Up User Equivalence
- Run the rootpre.sh script
- Set Up the Voting Disk on the third site

5. PREPARING THE SYSTEM


5.1 Hardware Requirement
You must ensure that each system meets the following requirements. Follow these steps:

5.1.1 To determine the physical RAM size, enter the following command:

# /usr/sbin/lsattr -E -l sys0 -a realmem

The minimum RAM required is 1 GB.

DC site:
         Actual Value (GB)   Remark
Node 1   32GB
Node 2   32GB

DR site:
         Actual Value (GB)   Remark
Node 1   32GB
Node 2   32GB

5.1.2 To determine the SWAP size, enter:

# /usr/sbin/lsps -a

Sizing guidelines:
a) SWAP is twice the size of RAM, if RAM is smaller than 2 GB.
b) SWAP is equal to RAM, if RAM is between 2 GB and 8 GB.
c) SWAP is 0.75 times the size of RAM, if RAM is greater than 8 GB.

At SBV, however, SWAP is less than 0.75 times RAM, because the RAM size is very large.

DC site:
         Actual Value (GB)   Remark
Node 1   20GB
Node 2   20GB

DR site:
         Actual Value (GB)   Remark
Node 1   20GB
Node 2   20GB
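The sizing rule above can be checked with a small ksh sketch (it assumes lsattr reports realmem in kilobytes, as AIX does):

ram_kb=`/usr/sbin/lsattr -E -l sys0 -a realmem | awk '{print $2}'`
ram_mb=`expr $ram_kb / 1024`
if [ $ram_mb -lt 2048 ]; then
    req=`expr $ram_mb \* 2`        # RAM < 2 GB: swap = 2 x RAM
elif [ $ram_mb -le 8192 ]; then
    req=$ram_mb                    # 2-8 GB: swap = RAM
else
    req=`expr $ram_mb \* 3 / 4`    # RAM > 8 GB: swap = 0.75 x RAM
fi
echo "RAM: ${ram_mb} MB, recommended swap: ${req} MB"
/usr/sbin/lsps -a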

5.1.3 To determine the amount of disk space available in the /tmp directory, enter the following command:

# df -k /tmp

The minimum required /tmp size is 400 MB.

DC site:
         Actual Value (GB)   Remark
Node 1   10GB
Node 2   10GB

DR site:
         Actual Value (GB)   Remark
Node 1   10GB
Node 2   10GB

5.1.4 Configure OCR and Voting disks for DCDB and DRDB.

On the DC and DR systems, the OCR and Voting disks will be placed on raw disks. The size of each OCR and Voting disk is at least 256 MB (at SBV it is 1 GB).

OCR configuration:
- Size of each OCR disk: 1 GB
- Owner and permissions of each OCR disk: oracle:dba, 660
- Location of the OCR disks: 2 OCR disks in the DS8K at each pair of clusters

Voting disk configuration:
- Size of each Voting disk: 1 GB
- Owner and permissions of each Voting disk: oracle:dba, 660
- Location of the Voting disks: 3 Voting disks in the DS8K at each pair of clusters

5.1.5 Steps to configure the OCR and Voting disks for the DC and DR sites:

1) LUN creation

Disks    LUN ID/Number   LUN Size
OCR                      1GB
Voting                   1GB

2) Preparing raw disks for the OCR and Voting disks

Disks    Node 1    Node 2
OCR      hdisk9    hdisk9
OCR      hdisk10   hdisk10
Voting   hdisk4    hdisk4
Voting   hdisk5    hdisk5
Voting   hdisk6    hdisk6

3) Follow the steps below to configure the disk devices for OCR and Voting.

Identify or configure the required disk devices. The disk devices must be shared on all of the cluster nodes. As the root user, enter the following command on any node to identify the device names for the devices that you want to use:

# lspv | grep -i none

This command displays information similar to the following for each device that is not configured in a volume group:

hdisk17 0009005fb9c23648 None

where:
- hdisk17 is the device name
- 0009005fb9c23648 is the physical volume ID (PVID)

If a disk device that you want to use does not have a PVID, enter a command similar to the following to assign one temporarily:

# chdev -l hdiskn -a pv=yes

Note: if the disk already has a PVID, chdev will overwrite it, which will cause applications that depend on the previous PVID to fail.

On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

# lspv | grep -i "0009005fb9c23648"

The output from this command should be similar to the following:

hdisk18 0009005fb9c23648 None

The device name associated with this device on this node is hdisk18, while on the primary node it is hdisk17 (the names are different).

If the device names are the same on all nodes, enter the following commands on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices (a loop covering all five disks is sketched at the end of this section):

- OCR device:
# chown root:oinstall /dev/rhdiskn
# chmod 640 /dev/rhdiskn

- Voting device:
# chown oracle:dba /dev/rhdiskn
# chmod 660 /dev/rhdiskn

If the device name associated with the PVID for a disk that you want to use is different on any node, you must create a new device file for the disk on each node using a common unused name. To create a new device file for a disk device on all nodes, perform these steps on each node:

a) Determine the device major and minor numbers:

# ls -alF /dev/*hdiskn

The output from this command is similar to the following:

brw------- 1 root system 24,8192 Dec 05 2001 /dev/hdiskn
crw------- 1 root system 24,8192 Dec 05 2001 /dev/rhdiskn

In this case, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.

b) Create a new device file, for example:

# mknod /dev/ora_ocr_raw_280m c 24 8192

c) Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for the disk:

- OCR:
# chown root:oinstall /dev/ora_ocr_raw_280m
# chmod 640 /dev/ora_ocr_raw_280m

- Voting:
# chown oracle:dba /dev/ora_vote_raw_280m
# chmod 660 /dev/ora_vote_raw_280m

d) Verify that you have created the new device file successfully:

# ls -alF /dev | grep "24,8192"

The output should be similar to the following:

brw------- 1 root system 24,8192 Dec 05 2001 /dev/hdiskn
crw-r----- 1 root oinstall 24,8192 Dec 05 2001 /dev/ora_ocr_raw_280m
crw------- 1 root system 24,8192 Dec 05 2001 /dev/rhdiskn

To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute, depending on the type of reserve attribute used by your disks. Follow the steps below to perform this task using hdisk logical names.

To determine the reserve setting your disks use, enter:

# lsattr -E -l hdiskn | grep reserve_

The response is either a reserve_lock setting or a reserve_policy setting:
- If the attribute is reserve_lock, ensure that the setting is reserve_lock=no.
- If the attribute is reserve_policy, ensure that the setting is reserve_policy=no_reserve.

If necessary, change the setting with the chdev command using the following syntax:

# chdev -l hdiskn -a [ reserve_lock=no | reserve_policy=no_reserve ]

Enter a command similar to the following on any node to clear the PVID from each disk device that you want to use:

# chdev -l hdiskn -a pv=clear

Format (zero) the devices and verify concurrent read/write access by running the dd command from each node at the same time:

From node 1 (at the same time):
dd if=/dev/zero of=/dev/hdiskx bs=8192 count=25000

From node 2 (at the same time):
dd if=/dev/zero of=/dev/hdiskx bs=8192 count=25000

Repeat this operation with the other disk devices.

5.1.6 To determine whether the system architecture can run the software, enter the following command:

# getconf HARDWARE_BITMODE

The output of this command should be 64; otherwise, you cannot install the software on this system.

5.1.7 To determine whether the system is started in 64-bit mode, enter:

# bootinfo -K

The result of this command should be 64, indicating that the 64-bit kernel is enabled.
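Since the same ownership, permission, and reserve-attribute changes are repeated for every OCR and Voting disk, a loop like the following sketch saves typing (hdisk numbers from section 5.1.5, step 2; it assumes your disks use the reserve_policy attribute; run as root on each node):

# OCR disks: root:oinstall, mode 640, no SCSI reservation
for d in hdisk9 hdisk10
do
    chown root:oinstall /dev/r$d
    chmod 640 /dev/r$d
    chdev -l $d -a reserve_policy=no_reserve
done
# Voting disks: oracle:dba, mode 660, no SCSI reservation
for d in hdisk4 hdisk5 hdisk6
do
    chown oracle:dba /dev/r$d
    chmod 660 /dev/r$d
    chdev -l $d -a reserve_policy=no_reserve
done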

5.2 Software Requirements


Depending on the products that you intend to install, verify that the following software is installed on the system.

5.2.1 OS version requirement

AIX releases supported with Oracle 10g RAC: AIX 5L version 5.3, Maintenance Level 02 or later (64-bit).

To determine which version of AIX is installed, enter:

# oslevel -r

With the SBV system, the result of the command is 5300-08-01-0819.

5.2.2 OS filesets requirement
The following operating system filesets are required:
- bos.adt.base
- bos.adt.lib
- bos.adt.libm
- bos.perf.libperfstat
- bos.perf.perfstat
- bos.perf.proctools
- rsct.basic.rte
- rsct.compat.clients.rte
- xlC.aix50.rte 7.0.0.4
- xlC.rte 7.0.0.1

You must have the xlC C/C++ runtime filesets for installation, but you do not require the C/C++ compiler.

Product-specific requirements:
- Oracle Real Application Clusters: ASM is required, as we use ASM for the Clusterware files and for the Oracle Database files.
- ADA: OC Systems PowerAda 5.4d
- JDK: IBM JDK 1.4.2 is installed with this release
- Pro*FORTRAN: IBM XL Fortran v10.1 for AIX
- Utilities: GNU find 4.1, gdb 6.0, GNU make 3.80, GNU tar 1.13, Perl 5.005_03 + MIME 2.21, Perl 5.6 + MIME 2.21, Perl 5.8.3, Python 2.2, Unzip 5.4.2, Zip 2.3
- Pro*C/C++, Oracle Call Interface, Oracle C++ Call Interface, Oracle XML Developer's Kit, GNU Compiler Collection: N/A

To determine whether the required filesets are installed and committed, enter a command similar to the following:

# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat \
  bos.perf.libperfstat bos.perf.proctools rsct.basic.rte
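A small loop over the full fileset list (names from the table above) reports anything that is missing; a sketch, run as root:

for f in bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat \
         bos.perf.perfstat bos.perf.proctools rsct.basic.rte \
         rsct.compat.clients.rte xlC.aix50.rte xlC.rte
do
    lslpp -l $f > /dev/null 2>&1 || echo "MISSING: $f"
done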

5.2.3 AIX APAR and other operating system fixes


All AIX 5L v5.3 installations require the Authorized Problem Analysis Report (APAR) fixes for AIX 5L v5.3 ML02 or later, plus the following AIX fixes:
- IY68989: WRITE TO MMAPPED SPACE HANGS
- IY68874
- IY70031: CORRUPTION FROM SIMULTANEOUS CIO WRITES WITH O_DSYNC ON JFS2 (required if using the IBM Journaled File System Version 2 (JFS2) for Oracle Database files)

NOTE: All Oracle 9i Database and Oracle 10g Database customers running on AIX 5L V5.3 Technology Level 5 (TL 5300-05) must install the IBM AIX PTF for APAR IY89080. In addition, Oracle customers should contact Oracle Support to obtain the fix for Oracle Bug 496862.

- IZ03260: LIO_LISTIO FAILS TO UPDATE AIO CONTROL BLOCKS ON ERROR; applies to AIX 5300-06 (for AIX 5.3 TL06 customers)
- IZ03475: LIO_LISTIO FAILS TO UPDATE AIO CONTROL BLOCKS ON ERROR; applies to AIX 5300-07 (for AIX 5.3 TL07 customers)

To determine whether a required APAR is installed, enter:

# instfix -i -k "IY68989"
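The same check can be run over the whole list at once; a sketch (APAR numbers from the list above) whose output shows, for each APAR, whether all of its filesets were found:

for apar in IY68989 IY68874 IY70031 IY89080 IZ03260 IZ03475
do
    instfix -i -k "$apar"
done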

5.3 Tuning AIX System Environment


The parameter and shell limit values shown in this section are recommended values only. For a production database system, Oracle recommends that you tune these values to optimize the performance of the system.

5.3.1 Tuning Virtual Memory Manager (VMM)

Oracle recommends that you use the vmo command to tune virtual memory using the following values:

Parameter          Value   Default
minperm%           3       20
maxperm%           90      80
maxclient%         90      80
lru_file_repage    0       1
strict_maxclient   1       1
strict_maxperm     0       0

For example:

vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o lru_file_repage=0
vmo -p -o strict_maxclient=1
vmo -p -o strict_maxperm=0

You must restart the system for these changes to take effect.

5.3.2 Configuring Shell Limits

To improve the software performance, you must increase the following shell limits:

Shell Limit                                               Item       Hard limit
Maximum number of open file descriptors                   nofiles    65536
Maximum number of processes available to a single user    maxuproc   16384

To increase the shell limits:

1. Add the following lines to the /etc/security/limits file:

default:
    fsize = -1
    core = -1
    cpu = -1
    data = 512000
    rss = 512000
    stack = 512000
    nofiles = 2000

2. Enter the following command to list the current setting for the maximum number of processes allowed for the Oracle software user:

# /usr/sbin/lsattr -E -l sys0 -a maxuproc

If necessary, change the maxuproc setting using the following command:

# /usr/sbin/chdev -l sys0 -a maxuproc=16384

3. Repeat this procedure on all other nodes in the cluster.

5.3.3 Configuring User Process Parameters

Verify that the maximum number of processes allowed for each user is set to 2048 or greater:

1. Enter the following command:

# smit chgsys

2. Verify that the value shown for "Maximum number of PROCESSES allowed per user" is greater than or equal to 2048. If necessary, edit the existing value.

3. When you have finished making changes, press F10 to exit.

5.3.4 Configuring Network Tuning Parameters

Verify that the network tuning parameters shown in the following table are set to the values shown or to higher values:

Network Tuning Parameter   Recommended Value
ipqmaxlen                  512
rfc1323                    1
sb_max                     1310720 (2 * 655360)
tcp_recvspace              65536
tcp_sendspace              65536
udp_recvspace              655360
udp_sendspace              65536

Note on udp_recvspace: the recommended value is 10 times the value of the udp_sendspace parameter, and it must be less than the value of the sb_max parameter.

Note on udp_sendspace: for production databases, the minimum value for this parameter is 4 KB plus the value of the DB_BLOCK_SIZE initialization parameter multiplied by the value of the DB_MULTIBLOCK_READ_COUNT initialization parameter: (DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB.

To view the current settings and change them as required:

1. Check the current values:

# no -a | more

2. If you must change the value of any parameter, first determine whether the system is running in compatibility mode:

# lsattr -E -l sys0 -a pre520tune

If the result is:

pre520tune enable Pre-520 tuning compatibility mode True

then the system is running in compatibility mode.

3. If the system is running in compatibility mode, follow the steps below to change the values:

a. Enter the command below to change the value of each parameter:

# no -o parameter_name=value

For example:

# no -o udp_recvspace=655360

b. Add entries similar to the following to the /etc/rc.net file for each parameter that you changed in the previous step:

if [ -f /usr/sbin/no ] ; then
    /usr/sbin/no -o udp_sendspace=65536
    /usr/sbin/no -o udp_recvspace=655360
    /usr/sbin/no -o tcp_sendspace=65536
    /usr/sbin/no -o tcp_recvspace=65536
    /usr/sbin/no -o rfc1323=1
    /usr/sbin/no -o sb_max=1310720
    /usr/sbin/no -o ipqmaxlen=512
fi

4. If the system is not running in compatibility mode:

For the ipqmaxlen parameter (a reboot value):
# no -r -o ipqmaxlen=512

For the other parameters:
# no -p -o parameter_name=value
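A sketch that prints the current value of each tunable next to the recommended one (recommended values from the table above):

for p in "ipqmaxlen 512" "rfc1323 1" "sb_max 1310720" \
         "tcp_recvspace 65536" "tcp_sendspace 65536" \
         "udp_recvspace 655360" "udp_sendspace 65536"
do
    set -- $p
    cur=`no -o $1 | awk '{print $3}'`
    echo "$1: current=$cur recommended=$2"
done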

5.3.5 Increasing Space Block Size Allocation

Oracle recommends that you increase the space allocated for the ARG/ENV list to 128:

# chdev -l sys0 -a ncargs='128'

5.4 Users and Groups


You must create the following users and groups for Oracle Clusterware and RAC installation and management:
- Create the oinstall group for the software owner group and inventory
- Create the dba group
- Create the oracle user as the CRS and RAC software owner

The group IDs and user ID for oinstall, dba, and oracle must be identical on all member nodes of the cluster.

oinstall:  # mkgroup id=500 oinstall
dba:       # mkgroup id=501 dba
oracle:    Using smit security, create the user with ID=500 and the following attributes:
           - oinstall is the PRIMARY group
           - dba is the SET group

Suppose Oracle Clusterware and Oracle RAC are installed under mount point /u01 (/u01 is owned by root). Do the following to create the Oracle software owner directories (change /u01 to whatever mount point you have):

/u01: mount point for Oracle software installation (owned by root)
/u01/app: Oracle Base
/u01/app/crs: Oracle CRS Home
/u01/app/oracle: Oracle RAC Home

# mkdir /u01/app
# chown oracle:oinstall /u01/app
# chmod -R 775 /u01/app/
# mkdir /u01/app/crs
# mkdir /u01/app/oracle
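If you prefer the command line to smit, a mkuser invocation along these lines creates the oracle user with the attributes listed above (a sketch; the home directory is an assumption):

# mkuser id=500 pgrp=oinstall groups=dba,oinstall home=/home/oracle oracle
# passwd oracle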

5.5 Network Configuration


1. Network hardware requirements:
- Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the interconnect).
- The public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names associated with the network adapters should be the same on all nodes.
- For increased reliability, configure redundant public and private network adapters for each node.
- For the public network, each network adapter must support TCP/IP.
- For the private network, the interconnect must support the User Datagram Protocol (UDP), using high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better).

2. Node name and IP identification

With DC site:

Public (en0)                 VIP (en0)                    RAC Interconnect / Private (en8)
Node name   IP               Node name    IP              Node name    IP
dcdb1       192.168.10.10    dcdb1_vip    192.168.10.14   dcdb1_priv   172.16.100.14
dcdb2       192.168.10.12    dcdb2_vip    192.168.10.16   dcdb2_priv   172.16.100.16

With DR site:

Public (en20)                VIP (en20)                   RAC Interconnect / Private (en16)
Node name   IP               Node name    IP              Node name    IP
drdb1       10.192.10.10     drdb1_vip    10.192.10.14    drdb1_priv   172.16.100.14
drdb2       10.192.10.12     drdb2_vip    10.192.10.16    drdb2_priv   172.16.100.16

3. Host file setup

/etc/hosts

With DC site:

# Public network
192.168.10.10 dcdb1 # node name db1
192.168.10.12 dcdb2 # node name db2
# Oracle RAC interconnect network
172.16.100.14 dcdb1-priv # db1 private IP
172.16.100.16 dcdb2-priv # db2 private IP
# Virtual IP for Oracle
192.168.10.14 dcdb1-vip # db1 virtual IP
192.168.10.16 dcdb2-vip # db2 virtual IP

With DR site:

# Public network
10.192.10.10 drdb1 # drdb1
10.192.10.12 drdb2 # drdb2
# Oracle RAC interconnect network
172.16.100.14 drdb1-priv # bak db1 private IP
172.16.100.16 drdb2-priv # bak db2 private IP
# Virtual IP for Oracle
10.192.10.14 drdb1-vip # bak db1 virtual IP
10.192.10.16 drdb2-vip # bak db2 virtual IP

5.6 Local Disk for Oracle code


The Oracle code (Oracle Clusterware and RAC software) can be located on an internal disk. Regular file systems are used for the Oracle code. Note: you can also use virtual I/O disks for the Oracle code. The preferred mount point for the Oracle code is /u01. It is recommended to have 30-50 GB of space for the Oracle code.

5.7 Node Time Requirement


Before starting the installation, ensure that the date and time on each member node of the cluster are set as closely as possible to the same value. Oracle strongly recommends using the Network Time Protocol (NTP) feature of most operating systems for this purpose.
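On AIX the NTP daemon is xntpd; a quick sketch to confirm on each node that it is running and synchronized (assuming NTP is configured in /etc/ntp.conf):

# lssrc -s xntpd
# ntpq -p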

5.8 User Equivalence Setup


Before you install and use Oracle Real Application Clusters, you must configure secure shell (SSH) for the oracle user on all cluster nodes. This task will be done by an IBM engineer.
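For reference, user equivalence is typically set up along these lines (a sketch, not necessarily the IBM engineer's exact procedure); run as the oracle user on each node:

$ ssh-keygen -t rsa

Then append every node's public key (~/.ssh/id_rsa.pub) to ~/.ssh/authorized_keys on all nodes, and verify that ssh works in both directions without a password prompt:

$ ssh dcdb1 date
$ ssh dcdb2 date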

5.9 Running the rootpre.sh script


Note: Do not run this script if you have a later release of the Oracle Database software already installed on this system.

1. Switch user to root:
$ su - root

2. Run the rootpre.sh script:
# <source directory>/rootpre.sh

3. Exit from the root account.

4. Repeat these steps on all cluster nodes.

5.10 Update .profile (User Environment)


ORACLE_BASE=/u01/app; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/oracle; export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin; export PATH
ORACLE_SID=<SID>; export ORACLE_SID
AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
umask 022

6. INSTALLING ORACLE CLUSTERWARE


6.1 Verify Oracle Clusterware Requirements with CVU
Use the following command to verify and check the system requirements before starting to install Oracle Clusterware:

$ /mountpoint/runcluvfy.sh stage -pre crsinst -n node_list

The Cluster Verification Utility Oracle Clusterware preinstallation stage check verifies the following:
- Node reachability
- User equivalence
- Node connectivity
- Administrative privileges
- Shared storage accessibility
- System requirements
- Kernel packages
- Node applications
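For example, for the DC cluster the pre-CRS check might be invoked as follows (node names from section 5.5; the -verbose flag is optional):

$ ./runcluvfy.sh stage -pre crsinst -n dcdb1,dcdb2 -verbose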

6.2 Preparing to install Oracle CRS with OUI


- Shut down all running Oracle processes.
- Determine the Oracle Inventory location (oraInventory): /u01/app
- Obtain root account access.
- Determine the cluster name, public node names, private node names, and virtual node names for each node in the cluster.

With DC site:

CRS Node   Public   IP              VIP         IP              Private      IP
Node1      dcdb1    192.168.10.10   dcdb1_vip   192.168.10.14   dcdb1_priv   172.16.100.14
Node2      dcdb2    192.168.10.12   dcdb2_vip   192.168.10.16   dcdb2_priv   172.16.100.16

Identify the shared storage for the Clusterware files:

OCR Location      /dev/hdisk9
OCR Location      /dev/hdisk10
Voting Location   /dev/hdisk4
Voting Location   /dev/hdisk5
Voting Location   /dev/hdisk6

With DR site:

CRS Node   Public   IP             VIP         IP             Private      IP
Node1      drdb1    10.192.10.10   drdb1_vip   10.192.10.14   drdb1_priv   172.16.100.14
Node2      drdb2    10.192.10.12   drdb2_vip   10.192.10.16   drdb2_priv   172.16.100.16

Identify the shared storage for the Clusterware files:

OCR Location      /dev/hdisk9
OCR Location      /dev/hdisk10
Voting Location   /dev/hdisk4
Voting Location   /dev/hdisk5
Voting Location   /dev/hdisk6

6.3 Confirming Oracle Clusterware Function


After installation, log in as root and use the following command to confirm that your Oracle Clusterware installation is installed and running correctly:

# $CRS_HOME/bin/crs_stat -t

6.4 Oracle Clusterware Postinstallation Procedures


1. Required postinstallation tasks:

a) Back up the voting disk after installation. Use the cp command to back up the voting disk. Perform this task after you complete any installation, node addition, or node deletion.

b) Download and install patch updates. Patches to apply:
- Patch Set 3 (10.2.0.4)

2. Recommended postinstallation tasks:

Oracle recommends that you back up the root.sh script after you complete an installation.
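Because the voting disks in this configuration are raw devices (see section 6.2), dd is often used for the actual copy; a sketch, where the backup file name and location are assumptions:

# dd if=/dev/rhdisk4 of=/u01/app/crs_backup/voting_hdisk4.bak bs=1024k

Repeat for each voting disk (hdisk5, hdisk6).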

7. TASKS FOR AFTER INSTALLATION


7.1 Tuning parameter file
ITEM       VALUE   REMARK
LOCK_SGA   TRUE    Oracle databases requiring high performance will usually benefit from running with a pinned Oracle SGA:

$ /usr/sbin/vmo -r -o v_pinshm=1
$ /usr/sbin/vmo -r -o maxpin%=percent_of_real_memory

where percent_of_real_memory = ((size of SGA / size of physical memory) * 100) + 3.

Set the LOCK_SGA parameter to TRUE in the init.ora. You need to stop and start Oracle for the change to take effect.
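A worked example of the formula above: with a 16 GB SGA on the 32 GB nodes from section 5.1, percent_of_real_memory = (16/32)*100 + 3 = 53. As a ksh sketch (the SGA size is an assumption for illustration; -r changes take effect after the next reboot):

sga_mb=16384    # planned SGA size in MB (assumption)
ram_mb=32768    # physical memory in MB, from section 5.1
pct=`expr $sga_mb \* 100 / $ram_mb + 3`
/usr/sbin/vmo -r -o v_pinshm=1
/usr/sbin/vmo -r -o maxpin%=$pct
echo "maxpin% staged to $pct for next reboot"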

8. INSTALLATION SCREEN SHOT


8.1 Two node extended RAC (DC)
CRS Installation screen shot
Begin Installation:

Specify Inventory directory and credentials:

Specify Home Details:

Product Specific Prerequisite Checks:

Specify Cluster Configuration:

Specify Network Interface Usage:

Specify Oracle Cluster Registry (OCR) Location:

Specify Voting Disk Location:

Summary:

Installation progress:

Install VIP CA:

Configuration Assistant Progress Dialog:

End of Installation:

Oracle Software installation screen shot


Begin of installation:

Select Installation Type:

Specify Home Details

Specify Hardware Cluster Installation Mode:

Product Specific Prerequisite Checks

Select Configuration Options:

Summary:

Install progress

Execute Configuration scripts

Running scripts

Configure ASM Instance


Database Configuration Assistant: Operations

Database Configuration Assistant : Node Selections

Create ASM Instance

ASM Disk Groups:

Create disk group:

Create database : dcore

8.2 Two node extended RAC (DR)


CRS Installation screen shot
Specify Home Details:

Specify Hardware Cluster Installation Mode:

Product Specific Prerequisite Checks:

End of Installation:

Running scripts:

Oracle Software installation screen shot


Specify Home Details:

Product Specific Prerequisite Checks:
