1. HARDWARE ARCHITECTURE
   1.1 Overall Configuration
   1.2 DB Servers: IBM pSeries Servers (p595)
   1.3 Operating System
   1.4 Network Infrastructure
   1.5 AIX Virtual I/O Disks
2. BASIC DATABASE INFORMATION FOR IBPS
   2.1 DC and DR Database Extended RAC
3. WAY TO IMPLEMENT ORACLE 10g RAC ON AIX 5L
4. CHECK LIST TO USE AND FOLLOW
5. PREPARING THE SYSTEM
   5.1 Hardware Requirements
   5.2 Software Requirements
   5.3 Tuning the AIX System Environment
   5.4 Users and Groups
   5.5 Network Configuration
   5.6 Local Disk for Oracle Code
   5.7 Node Time Requirement
   5.8 User Equivalence Setup
   5.9 Running the rootpre.sh Script
   5.10 Update .profile (User Environment)
6. Verify Oracle Clusterware Requirements with CVU
   Preparing to Install Oracle CRS with OUI
   Confirming Oracle Clusterware Function
   Oracle Clusterware Postinstallation Procedures
7. TASKS FOR AFTER INSTALLATION
   7.1 Tuning Parameter File
8. INSTALLATION SCREEN SHOT
   8.1 Two Node Extended RAC (DC)
       CRS Installation Screen Shot
       Oracle Software Installation Screen Shot
       Configure ASM Instance
   8.2 Two Node Extended RAC (DR)
       CRS Installation Screen Shot
1. HARDWARE ARCHITECTURE
For our infrastructure, we use a cluster composed of five IBM pSeries (p595) servers. This chapter will be revised after the hardware implementation is completed.
Ping all IP addresses. The public and private IP addresses should respond to ping commands; the VIP addresses should not respond yet.
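The check above can be scripted; a minimal sketch, assuming the DC host names from this document's /etc/hosts. PING_CMD is injectable so the loop can be exercised off the cluster; exact ping timeout flags differ between AIX and Linux.

```shell
# Minimal ping verification for the host names above (DC site assumed).
# PING_CMD can be overridden for testing; ping flags vary by OS.
PING_CMD=${PING_CMD:-"ping -c 1"}

check_hosts() {
  # $1: expected result ("up" or "down"); remaining args: host names
  expect=$1; shift
  for h in "$@"; do
    if $PING_CMD "$h" >/dev/null 2>&1; then
      if [ "$expect" = up ]; then echo "OK $h responds"; else echo "FAIL $h: VIP must not respond yet"; fi
    else
      if [ "$expect" = up ]; then echo "FAIL $h: no response"; else echo "OK $h silent"; fi
    fi
  done
}

check_hosts up   dcdb1 dcdb2 dcdb1-priv dcdb2-priv
check_hosts down dcdb1-vip dcdb2-vip
```

Any FAIL line means the network setup does not yet match the expectations above.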
$ORACLE_CRS_HOME $ORACLE_HOME
DATABASE SERVER IDENTIFICATION (DC 2-NODE RAC, DR 2-NODE RAC)

1) SERVER NAME
CATEGORY   SERVER NAME      REMARK
DC         dcdb1, dcdb2     DC = cluster name; dbx = node order
DR         drdb1, drdb2     DR = cluster name; dbx = node order
2) /etc/hosts

CATEGORY: DC

# Public network
192.168.10.10   dcdb1        # node name db1
192.168.10.12   dcdb2        # node name db2
# Oracle RAC interconnect network
172.16.100.14   dcdb1-priv   # db1 private IP
172.16.100.16   dcdb2-priv   # db2 private IP
# Virtual IP for Oracle
192.168.10.14   dcdb1-vip    # db1 virtual IP
192.168.10.16   dcdb2-vip    # db2 virtual IP

CATEGORY: DR

# Public network
10.192.10.10    drdb1        # drdb1
10.192.10.12    drdb2        # drdb2
# Oracle RAC interconnect network
172.16.100.14   drdb1-priv   # bak db1 private IP
172.16.100.16   drdb2-priv   # bak db2 private IP
# Virtual IP for Oracle
10.192.10.14    drdb1-vip    # bak db1 virtual IP
10.192.10.16    drdb2-vip    # bak db2 virtual IP
Operations:
1. Check the Hardware Requirements
2. Check the Network Requirements
3. Check the Software Requirements
4. Tuning the AIX System Environment
5. Create Required UNIX Groups and Users
6. Configure Kernel Parameters and Shell Limits
7. Identify Required Software Directories
8. Identify or Create an Oracle Base Directory
9. Create the CRS Home Directory
10. Choose Storage Options for Oracle CRS, Database, and Recovery Files
11. Create LUNs for Oracle CRS, Database, and Recovery Files
12. Configure Disks for ASM
13. Synchronize the System Time on Cluster Nodes
14. Stop Existing Oracle Processes
15. Configure the Oracle User Environment
16. User Equivalence Setup
17. Running the rootpre.sh Script
18. Voting Disk on the Third Site Setup
To determine the required SWAP size:
a) SWAP is twice the size of RAM, if RAM is smaller than 2 GB.
b) SWAP is equal to RAM, if RAM is between 2 GB and 8 GB.
c) SWAP is 0.75 times the size of RAM, if RAM is greater than 8 GB.
In SBV, however, SWAP is less than 0.75 times RAM, because the RAM size is very large.

With DC site:
Node     Actual Value (GB)   Remark
Node 1   20 GB
Node 2   20 GB

With DR site:
Node     Actual Value (GB)   Remark
Node 1   20 GB
Node 2   20 GB
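The sizing rules a) to c) above can be expressed as a small function; a sketch over RAM in MB. The lsattr/lsps lines in the comment are the usual AIX sources for the real inputs and are an assumption, not from this document.

```shell
# The swap-sizing rules above, as a function over RAM in MB.
required_swap_mb() {
  ram_mb=$1
  if [ "$ram_mb" -lt 2048 ]; then
    echo $(( ram_mb * 2 ))          # RAM < 2 GB: twice RAM
  elif [ "$ram_mb" -le 8192 ]; then
    echo "$ram_mb"                  # 2 GB to 8 GB: equal to RAM
  else
    echo $(( ram_mb * 3 / 4 ))      # > 8 GB: 0.75 x RAM
  fi
}

# On AIX the real inputs would come from (assumption: stock commands):
#   lsattr -El sys0 -a realmem    # RAM in KB
#   lsps -s                       # current paging space
required_swap_mb 20480   # 20 GB RAM -> prints 15360
```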
5.1.3 To determine the amount of disk space available in the /tmp directory, enter the following command:

# df -k /tmp

The minimum requirement for the /tmp size is 400 MB.

With DC site:
Node     Actual Value (GB)   Remark
Node 1   10 GB
Node 2   10 GB
Configure OCR and Voting disks for DCDB and DRDB. On the DC and DR systems, the OCRs and Voting disks will be placed on raw disks. The required size of each OCR and Voting disk is at least 256 MB (in SBV, 1 GB is used).

OCR configuration:
- Size of each OCR disk: 1 GB
- Owner and permissions of the OCR disks: root:oinstall, 640
- Location of the OCR disks: 2 OCR disks in the DS8K at each cluster pair

Voting disk configuration:
- Size of each Voting disk: 1 GB
- Owner and permissions of the Voting disks: oracle:dba, 660
- Location of the Voting disks: 3 Voting disks in the DS8K at each cluster pair
5.1.5
Steps to configure the OCR and Voting disks for the DC and DR sites:

1) LUN creation

Disk     LUN ID/Number   LUN Size
OCR                      1 GB
Voting                   1 GB
2) Preparing raw disks for the OCR and Voting disks

Disk     Node 1 hdisk   Node 2 hdisk
OCR      hdisk9         hdisk9
OCR      hdisk10        hdisk10
Voting   hdisk4         hdisk4
Voting   hdisk5         hdisk5
Voting   hdisk6         hdisk6
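The ownership and permission commands applied later in this section can be generated from the disk list above; a sketch that prints them for review rather than executing them (run the printed commands as root on each node; disk names from the table, ownership per this document's OCR/Voting rules):

```shell
# Print the ownership/permission commands for the OCR and Voting raw
# devices listed above (review, then run as root on each node).
ocr_disks="hdisk9 hdisk10"
vote_disks="hdisk4 hdisk5 hdisk6"

emit_perm_cmds() {
  for d in $ocr_disks; do
    echo "chown root:oinstall /dev/r$d"
    echo "chmod 640 /dev/r$d"
  done
  for d in $vote_disks; do
    echo "chown oracle:dba /dev/r$d"
    echo "chmod 660 /dev/r$d"
  done
}

emit_perm_cmds
```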
3) Follow the steps below to configure the disk devices for OCR and Voting.

Identify or configure the required disk devices. The disk devices must be shared on all of the cluster nodes. As the root user, enter the following command on any node to identify the device names for the devices that you want to use:

# lspv | grep -i none
This command displays information similar to the following for each device that is not configured in a volume group:

hdisk17 0009005fb9c23648 None

where:
- hdisk17 is the device name
- 0009005fb9c23648 is the physical volume ID (PVID)

If a disk device that you want to use does not have a PVID, enter a command similar to the following to assign one temporarily (the PVID is cleared again later):

# chdev -l hdiskn -a pv=yes

Caution: if the disk already has a PVID, chdev will overwrite it, which will cause applications that depend on the previous PVID to fail.

On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

# lspv | grep -i "0009005fb9c23648"

The output from this command should be similar to the following:

hdisk18 0009005fb9c23648 None

The device name associated with this disk on this node is hdisk18, while on the primary node it is hdisk17 (the names differ).

If the device names are the same on all nodes, enter the following commands on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices:

- OCR device:
# chown root:oinstall /dev/rhdiskn
# chmod 640 /dev/rhdiskn

- Voting device:
# chown oracle:dba /dev/rhdiskn
# chmod 660 /dev/rhdiskn

If the device name associated with the PVID of a disk that you want to use differs on any node, you must create a new device file for the disk on each node, using a common unused name. To create a new device file for a disk device on all nodes, perform these steps on each node:

a) Determine the device major and minor numbers:

# ls -alF /dev/*hdiskn

The output from this command is similar to the following:

brw------- 1 root system 24,8192 Dec 05 2001 /dev/hdiskn
crw------- 1 root system 24,8192 Dec 05 2001 /dev/rhdiskn

In this case, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.
b) To create a new device file, enter a command similar to the following:

# mknod /dev/ora_ocr_raw_280m c 24 8192

c) Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for the disk:

- OCR:
# chown root:oinstall /dev/ora_ocr_raw_280m
# chmod 640 /dev/ora_ocr_raw_280m

- Voting:
# chown oracle:dba /dev/ora_vote_raw_280m
# chmod 660 /dev/ora_vote_raw_280m

d) Enter the following command to verify that you have created the new device file successfully:

# ls -alF /dev | grep "24,8192"

The output should be similar to the following:

brw------- 1 root system 24,8192 Dec 05 2001 /dev/hdiskn
crw-r----- 1 root oinstall 24,8192 Dec 05 2001 /dev/ora_ocr_raw_280m
crw------- 1 root system 24,8192 Dec 05 2001 /dev/rhdiskn

To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute, depending on the type of reserve attribute used by your disks. Follow the steps below to perform this task using hdisk logical names.

To determine the reserve setting your disks use, enter the command below:

# lsattr -E -l hdiskn | grep reserve_

The response is either a reserve_lock setting or a reserve_policy setting:
- If the attribute is reserve_lock, ensure the setting is reserve_lock=no.
- If the attribute is reserve_policy, ensure the setting is reserve_policy=no_reserve.

If necessary, change the setting with the chdev command, using the following syntax:

# chdev -l hdiskn -a [ reserve_lock=no | reserve_policy=no_reserve ]

Enter a command similar to the following on any node to clear the PVID from each disk device that you want to use:

# chdev -l hdiskn -a pv=clear

Format (zero) the devices and verify concurrent read/write access by running the dd command from node 1 and node 2 at the same time:

# dd if=/dev/zero of=/dev/hdiskx bs=8192 count=25000

Repeat this operation with the other disk devices, again running it from both nodes at the same time.

5.1.6 To determine whether the system architecture can run the software, enter the following command:
# getconf HARDWARE_BITMODE

The output of this command should be 64; otherwise you cannot install the software on this system.

5.1.7 To determine whether the system is started in 64-bit mode, enter the following command:

# bootinfo -K

The result of this command should be 64, indicating that the 64-bit kernel is enabled.
5.2.2
OS Filesets Requirement

Operating System Filesets
The following operating system filesets are required:
- bos.adt.base
- bos.adt.lib
- bos.adt.libm
- bos.perf.libperfstat
- bos.perf.perfstat
- bos.perf.proctools
- rsct.basic.rte
- rsct.compat.clients.rte
- xlC.aix50.rte 7.0.0.4
- xlC.rte 7.0.0.1

You must have the xlC C/C++ runtime filesets for the installation, but the C/C++ compiler itself is not required.

ASM is required, as we use ASM for the Clusterware files and for the Oracle Database files.

Additional products referenced by the requirements table:
- OC Systems PowerAda 5.4d
- IBM JDK 1.4.2 (installed with this release)
- IBM XL Fortran v10.1 for AIX
- GNU find 4.1, gdb 6.0, GNU make 3.80, GNU tar 1.13
- Perl 5.005_03 + MIME 2.21, Perl 5.6 + MIME 2.21, Perl 5.8.3
- Python 2.2, Unzip 5.4.2, Zip 2.3
- Pro*C/C++, Oracle Call Interface, Oracle C++ Call Interface, Oracle XML Developer's Kit, GNU Compiler Collection

To determine whether the required filesets are installed and committed, enter a command similar to the following:

# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat \
  bos.perf.libperfstat bos.perf.proctools rsct.basic.rte
5.2.3
5.3.1
Tuning Virtual Memory Manager (VMM)
Oracle recommends that you use the vmo command to tune virtual memory with the following values:

Parameter          Value
minperm%           3 (default is 20)
maxperm%           90 (default is 80)
maxclient%         90
lru_file_repage    0
strict_maxclient   1
strict_maxperm     0

For example:

# vmo -p -o minperm%=3
# vmo -p -o maxperm%=90
# vmo -p -o maxclient%=90
# vmo -p -o lru_file_repage=0
# vmo -p -o strict_maxclient=1
# vmo -p -o strict_maxperm=0

You must restart the system for these changes to take effect.
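The vmo settings above can be applied in one loop; a sketch in which DRY_RUN defaults to printing the commands, since vmo exists only on AIX:

```shell
# Apply the VMM settings above in one loop. DRY_RUN=1 (the default here)
# prints the commands instead of executing vmo; set DRY_RUN=0 on AIX.
vmm_settings="minperm%=3 maxperm%=90 maxclient%=90 lru_file_repage=0 strict_maxclient=1 strict_maxperm=0"

apply_vmm() {
  for s in $vmm_settings; do
    if [ "${DRY_RUN:-1}" = 1 ]; then
      echo "vmo -p -o $s"
    else
      vmo -p -o "$s"
    fi
  done
}

apply_vmm
```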
5.3.2
Configuring Shell Limits
To improve software performance, you must increase the following shell limits:

Shell Limit                                        Item in /etc/security/limits   Hard limit
Maximum number of open file descriptors            nofiles                        65536
Maximum number of processes per user               maxuproc                       16384

To increase the shell limits:

1. Add the following lines to the /etc/security/limits file:

default:
   fsize = -1
   core = -1
   cpu = -1
   data = 512000
   rss = 512000
   stack = 512000
   nofiles = 2000

2. Enter the following command to list the current setting for the maximum number of processes allowed for the Oracle software user:

# /usr/sbin/lsattr -E -l sys0 -a maxuproc

If necessary, change the maxuproc setting using the following command:

# /usr/sbin/chdev -l sys0 -a maxuproc=16384

3. Repeat this procedure on all other nodes in the cluster.
5.3.3
Configuring User Process Parameters Verify that the maximum number of processes allowed for each user is set to 2048 or greater: 1. Enter the following command:
# smit chgsys 2. Verify that the value shown for Maximum number of PROCESSES allowed for each user is greater than or equal to 2048. If necessary, edit the existing value. 3. When you have finished making changes, press F10 to exit.
5.3.4
Configuring Network Tuning Parameters
Verify that the network tuning parameters shown in the following table are set to the values shown, or to higher values:

Network Tuning Parameter   Recommended Value
ipqmaxlen                  512
rfc1323                    1
sb_max                     2*655360 (1310720)
tcp_recvspace              65536
tcp_sendspace              65536
udp_recvspace              655360
udp_sendspace              65536

Note on udp_recvspace: the recommended value is 10 times the value of the udp_sendspace parameter, and it must be less than the value of the sb_max parameter.

Note on udp_sendspace: for production databases, the minimum value for this parameter is 4 KB plus the value of the DB_BLOCK_SIZE initialization parameter multiplied by the value of the DB_MULTIBLOCK_READ_COUNT initialization parameter:
(DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB
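The udp_sendspace floor in the note above is simple arithmetic; a sketch, where the 8 KB block size and read count of 16 are example values, not taken from this document:

```shell
# Minimum udp_sendspace per the note above:
# (DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB
min_udp_sendspace() {
  db_block_size=$1               # bytes
  db_multiblock_read_count=$2
  echo $(( db_block_size * db_multiblock_read_count + 4096 ))
}

min_udp_sendspace 8192 16   # 8 KB blocks x 16 -> prints 135168
```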
How to view the current settings and change them as required:

1. To check the current values:

# no -a | more

2. If you must change the value of any parameter, first determine whether the system is running in compatibility mode:

# lsattr -E -l sys0 -a pre520tune

If the result is:

pre520tune enable Pre-520 tuning compatibility mode True

then the system is running in compatibility mode.

3. If the system is running in compatibility mode, follow the steps below to change the values:

a. Enter the command below to change the value of each parameter:

# no -o parameter_name=value

For example:

# no -o udp_recvspace=655360

b. Add entries similar to the following to the /etc/rc.net file for each parameter that you changed in the previous step:

if [ -f /usr/sbin/no ] ; then
   /usr/sbin/no -o udp_sendspace=65536
   /usr/sbin/no -o udp_recvspace=655360
   /usr/sbin/no -o tcp_sendspace=65536
   /usr/sbin/no -o tcp_recvspace=65536
   /usr/sbin/no -o rfc1323=1
   /usr/sbin/no -o sb_max=1310720
   /usr/sbin/no -o ipqmaxlen=512
fi

4. If the system is not running in compatibility mode:

For the ipqmaxlen parameter:
# no -r -o ipqmaxlen=512

For the other parameters:
# no -p -o parameter=value
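The procedure above is easier to audit if the current values are first compared with the recommendations; a sketch that diffs saved `no -a` output against the table's values. It is file-based so it can run off-box (on AIX: `no -a > /tmp/no.out; check_no_params /tmp/no.out`); sb_max is written as 1310720, i.e. 2*655360.

```shell
# Compare saved `no -a` output (lines like "udp_recvspace = 655360")
# against the recommended values from the table above.
check_no_params() {
  # $1: file containing `no -a` output
  while read -r name want; do
    cur=$(awk -v p="$name" '$1 == p { print $3 }' "$1")
    if [ "$cur" = "$want" ]; then
      echo "OK $name=$cur"
    else
      echo "CHANGE $name: ${cur:-unset} -> $want"
    fi
  done <<EOF
ipqmaxlen 512
rfc1323 1
sb_max 1310720
tcp_recvspace 65536
tcp_sendspace 65536
udp_recvspace 655360
udp_sendspace 65536
EOF
}
```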
5.3.5
Increasing Space Allocation for the ARG/ENV List
Oracle recommends that you increase the space allocated for the ARG/ENV list to 128 blocks:

# chdev -l sys0 -a ncargs='128'
The group IDs and user IDs for oinstall, dba, and oracle must be identical across all member nodes of the cluster.

# mkgroup id=500 oinstall
# mkgroup id=501 dba

Create the oracle user (ID=500) using smit security, with the following attributes:
- oinstall is the PRIMARY group
- dba is in the group SET

Suppose Oracle Clusterware and Oracle RAC are installed under the mount point /u01 (/u01 is owned by root). Do the following to create the directories for the Oracle software owner (change /u01 to whatever mount point you use):

/u01: mount point for the Oracle software installation (owned by root)
/u01/app: Oracle Base
/u01/app/crs: Oracle CRS home
/u01/app/oracle: Oracle RAC home

# mkdir /u01/app
# chown oracle:oinstall /u01/app
# chmod -R 775 /u01/app
# mkdir /u01/app/crs
# mkdir /u01/app/oracle
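The directory layout above can be sketched as a small function. As a hedge for unprivileged environments, the chown/chmod lines are printed rather than executed; on the real system they are run as root.

```shell
# Sketch of the Oracle Base directory layout above. The ownership
# commands are printed (not run) so the sketch works without root;
# run the printed lines as root on the real system.
setup_oracle_dirs() {
  base=$1                                   # e.g. /u01
  mkdir -p "$base/app/crs" "$base/app/oracle"
  echo "chown -R oracle:oinstall $base/app"
  echo "chmod -R 775 $base/app"
}

setup_oracle_dirs /tmp/u01.demo
```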
2. Node name and IP identification

With DC site:

Network                        Interface   Node name    IP
Public                         en0         dcdb1        192.168.10.10
Public                         en0         dcdb2        192.168.10.12
VIP                            en0         dcdb1_vip    192.168.10.14
VIP                            en0         dcdb2_vip    192.168.10.16
RAC interconnect (private)     en8         dcdb1_priv   172.16.100.14
RAC interconnect (private)     en8         dcdb2_priv   172.16.100.16

With DR site:

Network                        Interface   Node name    IP
Public                         en20        drdb1        10.192.10.10
Public                         en20        drdb2        10.192.10.12
VIP                            en20        drdb1_vip    10.192.10.14
VIP                            en20        drdb2_vip    10.192.10.16
RAC interconnect (private)     en16        drdb1_priv   172.16.100.14
RAC interconnect (private)     en16        drdb2_priv   172.16.100.16
/etc/hosts
With DC site
# Public network
192.168.10.10   dcdb1        # node name db1
192.168.10.12   dcdb2        # node name db2
# Oracle RAC interconnect network
172.16.100.14   dcdb1-priv   # db1 private IP
172.16.100.16   dcdb2-priv   # db2 private IP
# Virtual IP for Oracle
192.168.10.14   dcdb1-vip    # db1 virtual IP
192.168.10.16   dcdb2-vip    # db2 virtual IP
With DR site
# Public network
10.192.10.10    drdb1        # drdb1
10.192.10.12    drdb2        # drdb2
# Oracle RAC interconnect network
172.16.100.14   drdb1-priv   # bak db1 private IP
172.16.100.16   drdb2-priv   # bak db2 private IP
# Virtual IP for Oracle
10.192.10.14    drdb1-vip    # bak db1 virtual IP
10.192.10.16    drdb2-vip    # bak db2 virtual IP
5.10
With DC site
CRS Node   Public   IP              VIP         IP              Private      IP
Node1      dcdb1    192.168.10.10   dcdb1_vip   192.168.10.14   dcdb1_priv   172.16.100.14
Node2      dcdb2    192.168.10.12   dcdb2_vip   192.168.10.16   dcdb2_priv   172.16.100.16
With DC site
Identify the shared storage for Clusterware files:

OCR Location      /dev/hdisk9
OCR Location      /dev/hdisk10
Voting Location   /dev/hdisk4
Voting Location   /dev/hdisk5
Voting Location   /dev/hdisk6
With DR site
CRS Node   Public   IP             VIP         IP             Private      IP
Node1      drdb1    10.192.10.10   drdb1_vip   10.192.10.14   drdb1_priv   172.16.100.14
Node2      drdb2    10.192.10.12   drdb2_vip   10.192.10.16   drdb2_priv   172.16.100.16
Identify the shared storage for Clusterware files:

OCR Location      /dev/hdisk9
OCR Location      /dev/hdisk10
Voting Location   /dev/hdisk4
Voting Location   /dev/hdisk5
Voting Location   /dev/hdisk6
Parameter   Value
LOCK_SGA    TRUE
Screen shots: Summary; Installation progress; End of Installation.
Screen shots: Summary; Install progress; Running scripts; End of Installation.
Screen shots: Running scripts.