
Oracle RAC 12cR1

Ricardo Portilho Proni


ricardo@nervinformatica.com.br
This work is licensed under the
Creative Commons Attribution-NoDerivs 3.0 Brazil license.
To view a copy of this license, visit
http://creativecommons.org/licenses/by-nd/3.0/br/.

Oracle RAC: Concepts

Why use RAC?


Availability

Scalability

Total Cost of Ownership (TCO)

Why not use RAC?


Hardware Cost

License Cost

Knowledge Cost

Complexity

Scalability

Oracle RAC x Single Instance


1 Database x N Instances
+ Background Processes
+ daemons
OCR
Voting Disk

Oracle RAC Evolution


Oracle 6.0.35: VAX / VMS
Oracle 7: PCM
Oracle 8i: Cache Fusion I
Oracle 9i: Cache Fusion II, Oracle Cluster Management Services
Oracle 10gR1:
Oracle Cluster Management Services => Cluster Ready Services (CRS)
ASM - Automatic Storage management
FAN - Fast Application Notification
Integration with Database Services
AWR, ADDM, ASH, Scheduler, Enterprise Manager
Oracle 10gR2: CRS => Oracle Clusterware. New Features include cluvfy and asmcmd.
Oracle 11gR1: Only 7 New Features.
Oracle 11gR2: CRS => Grid Infrastructure. 32 New Features.
Oracle 12cR1: 33 New Features.

RAC 11gR1 New Features


Enhanced Oracle RAC Monitoring and Diagnostics in Enterprise Manager

Enhanced Oracle Real Application Clusters Configuration Assistants

OCI Runtime Connection Load Balancing

Parallel Execution for Oracle Real Application Clusters

Support for Distributed Transactions in an Oracle RAC Environment

Enhanced Oracle RAC Switchover Support for Logical Standby Databases

Enhanced Oracle RAC Monitoring and Diagnostics in Enterprise Manager

RAC 11gR2 New Features


Configuration Assistants Support New Oracle RAC Features

Enhanced Cluster Verification Utility

Integration of Cluster Verification Utility and Oracle Universal Installer

Cluster Time Service

Oracle Cluster Registry (OCR) Enhancements

Grid Plug and Play (GPnP)

Oracle Restart

Policy-Based Cluster and Capacity Management

Improved Clusterware Resource Modeling

Role-Separated Management

Agent Development Framework

Zero Downtime Patching for Oracle Clusterware and Oracle RAC

Enterprise Manager-Based Clusterware Resource Management

Enterprise Manager Provisioning for Oracle Clusterware and Oracle Real Application
Clusters

Enterprise Manager Support for Grid Plug and Play

Enterprise Manager Support for Oracle Restart

Configuration Assistant Support for Removing Oracle RAC Installations

RAC 11gR2 New Features


Oracle Universal Installer Support for Removing Oracle RAC Installations

Improved Deinstallation Support With Oracle Universal Installer

Downgrading Database Configured With DBControl

Oracle Restart Integration with Oracle Universal Installer

Out-of-Place Oracle Clusterware Upgrade

OUI Support for Out-of-Place Oracle Clusterware Upgrade

Server Control (SRVCTL) Enhancements

Server Control (SRVCTL) Enhancements to Support Grid Plug and Play

SRVCTL Support for Single-Instance Database in a Cluster

Universal Connection Pool (UCP) Integration with Oracle Data Guard

UCP Integration With Oracle Real Application Clusters

Universal Connection Pool (UCP) for JDBC

Java API for Oracle RAC FAN High Availability Events

EMCA Supports New Oracle RAC Configuration for Enterprise Manager

Global Oracle RAC ASH Report + ADDM Backwards Compatibility

RAC 12cR1 New Features


Oracle Flex Cluster

SRVCTL Support for Oracle Flex Cluster Implementations

Policy-Based Cluster Management and Administration

What-If Command Evaluation

Shared Grid Naming Service (GNS)

Online Resource Attribute Modification

Grid Infrastructure Script Automation for Installation and Upgrade

Multipurpose Cluster Installation Support

Support for IPv6 Based IP Addresses for Oracle RAC Client Connectivity

Message Forwarding on Oracle RAC

Sharded Queues for Performance and Scalability

Oracle Grid Infrastructure Rolling Migration for One-Off Patches


RAC 12cR1 New Features


Oracle Flex ASM

Oracle ASM Shared Password File in a Disk Group

Oracle ASM Rebalance Enhancements

Oracle ASM Disk Resync Enhancements

Oracle ASM chown, chgrp, chmod and Open Files Support

Oracle ASM Support ALTER DISKGROUP REPLACE USER

Oracle ASM File Access Control on Windows

Oracle ASM Disk Scrubbing

Oracle Cluster Registry Backup in ASM Disk Group Support

Enterprise Manager Support for Oracle ASM Features

Oracle ACFS Support for All Oracle Database Files

Oracle ACFS and Highly Available NFS

Oracle ACFS Snapshots Enhancements

Oracle ACFS Replication Integration with Oracle ACFS Security and Encryption

Oracle ACFS Security and Encryption Features

Oracle ACFS File Tags for Grid Homes

Oracle ACFS Plug-in APIs

Oracle ACFS Replication and Tagging on AIX

Oracle ACFS Replication and Tagging on Solaris

Oracle Audit Vault Support for Oracle ACFS Security and Encryption

Enterprise Manager Support for Oracle ACFS New Features


Hardware


Operating System


Certified Operating Systems


Linux x64

Oracle Linux 7 / Red Hat Enterprise Linux 7

Oracle Linux 6 / Red Hat Enterprise Linux 6

Oracle Linux 5 / Red Hat Enterprise Linux 5

SUSE Linux Enterprise Server 11


Linux on System z

Red Hat Enterprise Linux 6

Red Hat Enterprise Linux 5

SUSE 11
Unix

Oracle Solaris 11 (SPARC) / Oracle Solaris 10 (SPARC)

Oracle Solaris 11 (x64) / Oracle Solaris 10 (x64)

HP-UX 11iV3

AIX 7.1 / AIX 6.1


Windows (x64)
Windows Server 2008 SP2 - Standard, Enterprise, DataCenter, Web.
Windows Server 2008 R2 - Foundation, Standard, Enterprise, DataCenter, Web.
Windows Server 2012 - Standard, Datacenter, Essentials, Foundation.
Windows Server 2012 R2 - Standard, Datacenter, Essentials, Foundation


Lab 1 OEL 6 Installation


Hands On !


Lab 1.1: OEL 6 Installation


On machines nerv01 and nerv02, install OEL.
- 1st screen: Install or upgrade an existing system
- 2nd screen: Skip
- 3rd screen: Next
- 4th screen: English (English), Next
- 5th screen: Brazilian ABNT2, Next
- 6th screen: Basic Storage Devices, Next
- 7th screen: Fresh Installation, Next
- 8th screen: nerv01.localdomain, Next
- 9th screen: America/Sao Paulo, Next
- 10th screen: Nerv2015, Nerv2015, Next
- 11th screen: Create Custom Layout, Next


Lab 1.2: OEL 6 Installation


- 12th screen: Create the partitions as below, then Next:
sda1    1024 MB            /boot
sda2    100000 MB          /
sda3    20000 MB           /home
sda5    16384 MB           swap
sda6    10000 MB           /var
sda7    10000 MB           /tmp
sda8    remaining space    /u01
- 13th screen: Format
- 14th screen: Write changes to disk
- 15th screen: Next
- 16th screen: Desktop
- 17th screen: Reboot
- Remove the DVD.
- After boot: Forward, Yes, I agree to the License Agreement, Forward, No, I
prefer to register at a later time, Forward, No thanks, I'll connect later, Forward,
Forward, Yes, Forward, Finish, Yes, OK.


Lab 2 DNS Configuration


Hands On !


Lab 2.1: DNS Configuration


On machine nerv09, install the packages required for DNS.
# yum -y install bind bind-utils
On machine nerv09, leave ONLY the following lines in the file /etc/named.conf.
options {
listen-on port 53 { 127.0.0.1; 192.168.0.201; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
// query-source address * port 53;
};
zone "." in {
type hint;
file "/dev/null";
};
zone "localdomain." IN {
type master;
file "localdomain.zone";
allow-update { none; };
};


Lab 2.2: DNS Configuration


On machine nerv09, leave ONLY the following lines in the file
/var/named/localdomain.zone.
$TTL    86400
@       IN SOA  localhost root.localhost (
                42      ; serial (d. adams)
                3H      ; refresh
                15M     ; retry
                1W      ; expiry
                1D )    ; minimum
        IN NS   localhost
localhost       IN A    127.0.0.1
nerv01          IN A    192.168.0.101
nerv02          IN A    192.168.0.102
nerv01-vip      IN A    192.168.0.111
nerv02-vip      IN A    192.168.0.112
rac01-scan      IN A    192.168.0.151
rac01-scan      IN A    192.168.0.152
rac01-scan      IN A    192.168.0.153


Lab 2.3: DNS Configuration


On machine nerv09, leave ONLY the following lines in the file
/var/named/0.168.192.in-addr.arpa.
$ORIGIN 0.168.192.in-addr.arpa.
$TTL 1H
@       IN      SOA     nerv09.localdomain. root.nerv09.localdomain. (
                        2
                        3H
                        1H
                        1W
                        1H )
0.168.192.in-addr.arpa. IN NS   nerv09.localdomain.
101     IN PTR  nerv01.localdomain.
102     IN PTR  nerv02.localdomain.
111     IN PTR  nerv01-vip.localdomain.
112     IN PTR  nerv02-vip.localdomain.
151     IN PTR  rac01-scan.localdomain.
152     IN PTR  rac01-scan.localdomain.
153     IN PTR  rac01-scan.localdomain.


Lab 2.4: DNS Configuration


On machine nerv09, start the DNS server and enable it to start automatically.
# service named start
# chkconfig named on
On machine nerv09, stop the firewall and disable it from starting automatically.
# service iptables stop
# service ip6tables stop
# chkconfig iptables off
# chkconfig ip6tables off


Lab 3 OEL 6 Configuration


Hands On !


Lab 3.1 OEL 6 Configuration


On machines nerv01 and nerv02, configure the public and private network interfaces.


Lab 3.2 OEL 6 Configuration


On machines nerv01 and nerv02, update the operating system and install the
prerequisites.
# service network restart
# yum -y update
# yum -y install oracle-rdbms-server-12cR1-preinstall
# yum -y install oracleasm-support
# yum -y install unzip wget iscsi-initiator-utils java-1.8.0-openjdk parted
# yum -y install unixODBC unixODBC.i686 unixODBC-devel unixODBC-devel.i686
# wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el6.x86_64.rpm
# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
On machines nerv01 and nerv02, remove the DNS 8.8.8.8 from the eth0 network interface.
On machines nerv01 and nerv02, change the following line in the file /etc/fstab.
tmpfs   /dev/shm   tmpfs   defaults,size=4g   0 0


Lab 3.3 OEL 6 Configuration


On machines nerv01 and nerv02, ADD to the file /etc/hosts:
# Public
192.168.0.101   nerv01.localdomain      nerv01
192.168.0.102   nerv02.localdomain      nerv02
# Private
192.168.1.101   nerv01-priv.localdomain nerv01-priv
192.168.1.102   nerv02-priv.localdomain nerv02-priv
# Virtual
192.168.0.111   nerv01-vip.localdomain  nerv01-vip
192.168.0.112   nerv02-vip.localdomain  nerv02-vip
# Storage
192.168.0.201   nerv09.localdomain      nerv09


Lab 3.4 OEL 6 Configuration


On machines nerv01 and nerv02, run the commands below.
# groupadd oper
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# usermod -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
# mkdir -p /u01/app/12.1.0.2/grid
# mkdir -p /u01/app/oracle/product/12.1.0.2/db_1
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01
# passwd oracle (Set the oracle user's password to: Nerv2015)


Lab 3.5 OEL 6 Configuration


On machines nerv01 and nerv02, change SELinux from enforcing to
permissive.
# vi /etc/selinux/config
On machines nerv01 and nerv02, disable the firewall.
# chkconfig iptables off
# chkconfig ip6tables off
On machines nerv01 and nerv02, disable NTP.
# mv /etc/ntp.conf /etc/ntp.conf.org
# reboot


Lab 3.6 OEL 6 Configuration


On machines nerv01 and nerv02, as the oracle user, ADD the lines below AT THE END of
the file /home/oracle/.bash_profile.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv01.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
export GRID_HOME=/u01/app/12.1.0.2/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi


Shared Storage

Shared Storage Options

Lab 4 Storage
Hands On !


Lab 4.1 Storage


On machine nerv09, create 3 partitions of 5 GB and 4 of 10 GB.
On machine nerv09, configure the iSCSI server.
# yum -y install scsi-target-utils
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
backing-store /dev/sda5
initiator-address 192.168.0.101
initiator-address 192.168.0.102
</target>
<target iqn.2010-10.com.nervinformatica:storage.asm01-02>
backing-store /dev/sda6
initiator-address 192.168.0.101
initiator-address 192.168.0.102
</target>
...
# service tgtd start
# chkconfig tgtd on


Lab 4.2 Storage (ASM)


On machines nerv01 and nerv02, enable the iSCSI Initiator service.
# chkconfig iscsid on
On machines nerv01 and nerv02, check the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201 -l
On machines nerv01 and nerv02, leave ONLY the new disks in the file
/etc/iscsi/initiatorname.iscsi.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-04
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-05
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-06
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-07


Lab 4.3 Storage (ASM)


On machines nerv01 and nerv02, check that the disks are visible locally.
# fdisk -l
On machine nerv01, partition the new disks.
# fdisk /dev/sdb
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
# fdisk /dev/sdc
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
...


Lab 4.4 Storage (ASM)


On machine nerv02, run detection of the new disks.
# partprobe /dev/sdb
# partprobe /dev/sdc
# partprobe /dev/sdd
# partprobe /dev/sde
# partprobe /dev/sdf
# partprobe /dev/sdg
# partprobe /dev/sdh


Lab 4.5 Storage (ASM)


On machines nerv01 and nerv02, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter>
asmadmin <enter>
y <enter>
y <enter>
# /etc/init.d/oracleasm status
On machine nerv01, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK01 /dev/sdb1
# /etc/init.d/oracleasm createdisk DISK02 /dev/sdc1
# /etc/init.d/oracleasm createdisk DISK03 /dev/sdd1
# /etc/init.d/oracleasm createdisk DISK04 /dev/sde1
# /etc/init.d/oracleasm createdisk DISK05 /dev/sdf1
# /etc/init.d/oracleasm createdisk DISK06 /dev/sdg1
# /etc/init.d/oracleasm createdisk DISK07 /dev/sdh1
On machine nerv02, run detection of the newly created disks.
# /etc/init.d/oracleasm scandisks


Lab 4.6 Storage (ASM)


On machines nerv01 and nerv02, check that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK01
# /etc/init.d/oracleasm querydisk -v -p DISK02
# /etc/init.d/oracleasm querydisk -v -p DISK03
# /etc/init.d/oracleasm querydisk -v -p DISK04
# /etc/init.d/oracleasm querydisk -v -p DISK05
# /etc/init.d/oracleasm querydisk -v -p DISK06
# /etc/init.d/oracleasm querydisk -v -p DISK07
On machines nerv01 and nerv02, check that the disks are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle asmadmin 8, 17 Jan 2 13:01 DISK01
brw-rw----. 1 oracle asmadmin 8, 33 Jan 2 13:01 DISK02
brw-rw----. 1 oracle asmadmin 8, 49 Jan 2 13:01 DISK03
brw-rw----. 1 oracle asmadmin 8, 65 Jan 2 13:01 DISK04
brw-rw----. 1 oracle asmadmin 8, 81 Jan 2 13:01 DISK05
brw-rw----. 1 oracle asmadmin 8, 97 Jan 2 13:01 DISK06
brw-rw----. 1 oracle asmadmin 8, 113 Jan 2 13:01 DISK07


Oracle Grid Infrastructure


Components
- Oracle Cluster Registry
- Voting Disk (Quorum Disk)
- Grid Infrastructure Management Repository (MGMTDB)
- VIPs and SCAN
- Utilities: crsctl, srvctl
- Daemons: ohasd, crsd, evmd, ons, evmlogger, ologgerd, cssdmonitor, cssdagent,
ocssd, octssd, osysmond, mdnsd, gpnpd, gipcd, orarootagent, oraagent, scriptagent
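A quick way to see these daemons on a running node is to list the lower-stack resources managed by ohasd, and the matching OS processes (a sketch, run as root with the Grid home used in these labs):
# /u01/app/12.1.0.2/grid/bin/crsctl stat res -init -t
# ps -ef | egrep 'ohasd|crsd|ocssd|evmd' | grep -v grep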


Lab 5 Grid Infrastructure


Hands On !


Lab 5.1 Grid Infrastructure


On machine nerv01, as the oracle user, unzip and run the Grid Infrastructure
installer.
$ cd /home/oracle
$ unzip -q linuxamd64_12102_grid_1of2.zip
$ unzip -q linuxamd64_12102_grid_2of2.zip
On machines nerv01 and nerv02, install the Cluster Verification Utility.
# rpm -ivh /home/oracle/grid/rpm/cvuqdisk-1.0.9-1.rpm
On machine nerv01, start the Grid Infrastructure installation.
$ cd grid
$ ./runInstaller

Lab 5.2 Grid Infrastructure

Lab 5.3 Grid Infrastructure

Lab 5.4 Grid Infrastructure

Lab 5.5 Grid Infrastructure

Lab 5.6 Grid Infrastructure

Lab 5.7 Grid Infrastructure

Lab 5.8 Grid Infrastructure

Lab 5.9 Grid Infrastructure

Lab 5.10 Grid Infrastructure

Lab 5.11 Grid Infrastructure

Lab 5.12 Grid Infrastructure

Lab 5.13 Grid Infrastructure

Lab 5.14 Grid Infrastructure

Lab 5.15 Grid Infrastructure

Lab 5.16 Grid Infrastructure

Lab 5.17 Grid Infrastructure

Lab 5.18 Grid Infrastructure

Lab 5.19 Grid Infrastructure

Lab 5.20 Grid Infrastructure

Lab 5.21 Grid Infrastructure

Lab 5.22 Grid Infrastructure

Lab 5.23 Grid Infrastructure

Lab 5.24 Grid Infrastructure

Lab 5.26 Grid Infrastructure

Lab 5.27 Grid Infrastructure

Lab 5.28 Grid Infrastructure

Lab 5.29 Grid Infrastructure

Lab 5.30 Grid Infrastructure

Lab 5.31 Grid Infrastructure

Lab 6 Oracle Database Software


Hands On !


Lab 6.1 Oracle Database Software


On machine nerv01, as the oracle user, unzip and run the Oracle Database Software
installer.
$ cd /home/oracle
$ unzip -q linuxamd64_12102_database_1of2.zip
$ unzip -q linuxamd64_12102_database_2of2.zip
$ cd database
$ ./runInstaller

Lab 6.2 Oracle Database Software

Lab 6.3 Oracle Database Software

Lab 6.4 Oracle Database Software

Lab 6.5 Oracle Database Software

Lab 6.6 Oracle Database Software

Lab 6.7 Oracle Database Software

Lab 6.8 Oracle Database Software

Lab 6.9 Oracle Database Software

Lab 6.10 Oracle Database Software

Lab 6.11 Oracle Database Software

Lab 6.12 Oracle Database Software

Lab 6.13 Oracle Database Software

Lab 6.14 Oracle Database Software

Lab 6.15 Oracle Database Software

Lab 6.16 Oracle Database Software

Lab 6.17 Oracle Database Software

Lab 6.18 Oracle Database Software

Oracle Database


RAC Database
Background Processes
ACMS: Atomic Controlfile to Memory Service
GTX0-j: Global Transaction Process
LMON: Global Enqueue Service Monitor
LMD: Global Enqueue Service Daemon
LMS: Global Cache Service Process
LCK0: Instance Enqueue Process
RMSn: Oracle RAC Management Processes
RSMN: Remote Slave Monitor

PFILE / SPFILE (1x)

Control Files (1x)

Online Redo Log Threads (x Nodes)

UNDO Tablespaces / Datafiles (x Nodes)

Datafiles (1x)
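To see these per-node structures in a running RAC database, query the dictionary; a minimal sketch (assumes the two-node ORCL database built in these labs):
SQL> SELECT INST_ID, INSTANCE_NAME, THREAD# FROM GV$INSTANCE;
SQL> SELECT THREAD#, GROUP#, MEMBERS, STATUS FROM V$LOG ORDER BY THREAD#, GROUP#;
SQL> SELECT TABLESPACE_NAME FROM DBA_TABLESPACES WHERE CONTENTS = 'UNDO';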


Lab 7.1 Oracle Database


To log on to the ASM1 Instance, use SQL*Plus.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK 'ORCL:DISK04',
'ORCL:DISK05';
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK 'ORCL:DISK06',
'ORCL:DISK07';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
$ srvctl start diskgroup -g DATA -n nerv02
$ srvctl enable diskgroup -g DATA -n nerv02
$ srvctl start diskgroup -g FRA -n nerv02
$ srvctl enable diskgroup -g FRA -n nerv02

Lab 7.2 Oracle Database

Lab 7.3 Oracle Database

Lab 7.4 Oracle Database

Lab 7.5 Oracle Database

Lab 7.6 Oracle Database

Lab 7.7 Oracle Database

Lab 7.8 Oracle Database

Lab 7.9 Oracle Database

Lab 7.10 Oracle Database

Lab 7.11 Oracle Database

Lab 7.12 Oracle Database

Lab 7.13 Oracle Database

Lab 7.14 Oracle Database

Lab 7.15 Oracle Database

Lab 7.16 Oracle Database

Lab 7.17 Oracle Database


To log on to the ASM1 Instance, use SQL*Plus.
$ export ORACLE_SID=+ASM1
$ sqlplus / as SYSDBA

Why didn't it work?


Check the existing disks and the available space.
SQL> SELECT NAME, TOTAL_MB, FREE_MB, HOT_USED_MB, COLD_USED_MB FROM V$ASM_DISK;
SQL> SELECT NAME, TOTAL_MB, FREE_MB, HOT_USED_MB, COLD_USED_MB FROM V$ASM_DISKGROUP;

Create a TABLESPACE in ASM.


SQL> CREATE TABLESPACE nerv DATAFILE '+DATA';

Should this be done in the ASM Instance or in the Database Instance?


Check the newly created DATAFILE, and the existing ones.
SQL> SELECT FILE_NAME, BYTES, MAXBYTES, AUTOEXTENSIBLE, INCREMENT_BY FROM DBA_DATA_FILES;


Lab 7.18 Oracle Database


Run asmcmd and browse the Disk Group directories.
$ asmcmd -p
ASMCMD [+] > help
ASMCMD [+] > lsdg
Using asmcmd, copy a DATAFILE from ASM to /home/oracle on one of the RAC
machines.
Run a backup of the database.
$ rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
Why didn't it work?


Administration

Deprecated commands in 11gR2

Deprecated commands in 12cR1

Difficulties
$GRID_HOME x $ORACLE_HOME

oracle X root


GRID_HOME binaries
Add $GRID_HOME/bin to the $PATH, in .bash_profile
$ crsctl status res -t
OR
$ . oraenv
ORACLE_SID = [ORCL1] ? +ASM1 <enter>
OR
$ cd /u01/app/12.1.0.2/grid/bin/
$ ./crsctl status res -t
OR
$ /u01/app/12.1.0.2/grid/bin/crsctl status res -t


Daemons

Cluster Startup

Logs
11gR2
$GRID_HOME/log/<node>/
$GRID_HOME/log/<node>/alert<node>.log
12cR1
$ORACLE_BASE/diag/crs/<node>/crs
$ORACLE_BASE/diag/crs/<node>/crs/trace/alert.log
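For example, on the hosts used in these labs the 12cR1 Clusterware alert log can be followed with tail, or located through ADRCI (the node name in the path is just the lab host):
$ tail -f /u01/app/oracle/diag/crs/nerv01/crs/trace/alert.log
$ adrci exec="show homes"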


LAB 8 Daemons
Hands On !


Lab 8.1 Daemons


Watch the daemons running via top.
Shut down machine nerv01.
Watch what happens in the nerv02 Alert Log while nerv01 is shut down.
$ tail -f /u01/app/oracle/diag/crs/nerv02/crs/trace/alert.log
Power machine nerv01 back on.
Watch what happens in the nerv02 Alert Log while nerv01 is starting up.
$ tail -f /u01/app/oracle/diag/crs/nerv02/crs/trace/alert.log
Familiarize yourself with the log directory.
Check the state of the resources.
$ /u01/app/12.1.0.2/grid/bin/crsctl status res -t


Lab 8.2 Daemons


Keep following the Alert Logs on both machines.
Disconnect the Interconnect network cable from only one node.
What happened?
Disconnect the Storage network cable from only one node.
What happened?
Check and change the timeout parameters to the minimum possible.
# /u01/app/12.1.0.2/grid/bin/crsctl get css reboottime
# /u01/app/12.1.0.2/grid/bin/crsctl get css misscount
# /u01/app/12.1.0.2/grid/bin/crsctl get css disktimeout
# /u01/app/12.1.0.2/grid/bin/crsctl set css reboottime 1
# /u01/app/12.1.0.2/grid/bin/crsctl set css misscount 2
# /u01/app/12.1.0.2/grid/bin/crsctl set css disktimeout 3


Load Testing
Types

TPC-C: OLTP (retail)

TPC-E: OLTP (brokerage)

TPC-H: Data Warehouse


Tools

Hammerora

Swingbench


LAB 9 Load Testing


Hands On !


Lab 9.1 Load Testing


Copy swingbench to machine nerv01, as the oracle user.
Create a TABLESPACE named SOE.
Unzip swingbench.zip.
$ cd /home/oracle
$ unzip -q swingbench25956.zip
$ cd swingbench/bin
Run the creation of the load-test SCHEMA:
$ ./oewizard
Run the load test:
$ ./charbench -cs //rac01-scan/ORCL -uc 10


srvctl


srvctl
From any Node, it controls all of them.

Should be used as the oracle user, or as the owner of the GRID_HOME.

The srvctl from the GRID_HOME should be used.

The preferred command to start and stop RAC resources.

Manages Databases, Instances, ASM, Listeners, and Services.

A resource can be started, stopped, enabled, or disabled.
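Typical srvctl calls against the ORCL database built in these labs look like this (a sketch, run as the oracle user with the GRID_HOME srvctl in the PATH):
$ srvctl status database -d ORCL
$ srvctl config database -d ORCL
$ srvctl stop instance -d ORCL -i ORCL2
$ srvctl start instance -d ORCL -i ORCL2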


LAB 10 srvctl
Hands On !


Lab 10.1 srvctl


Run srvctl -h and understand the options.

Stop the Listener on only one Node.

Stop the Instance on only one Node.

Start the stopped Listener again.

Start the stopped Instance again.

Stop the Database, then start it again.

Stop an Instance with the ABORT option.

Start an Instance with the MOUNT option.

Kill an Instance (kill its pmon) on one of the nodes, and watch what happens.


Lab 10.2 srvctl


Put the database in ARCHIVELOG mode and run a backup.
$ srvctl stop database -d ORCL
$ srvctl start instance -d ORCL -i ORCL1 -o mount
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET db_recovery_file_dest='+FRA';
SQL> ALTER SYSTEM SET db_recovery_file_dest_size=10G;
SQL> ALTER DATABASE OPEN;
$ srvctl start instance -d ORCL -i ORCL2
RMAN> BACKUP DATABASE;


crsctl


crsctl
From any Node, it controls all of them.

Should be used as the root user.

Should be used from the GRID_HOME.

The main Grid administration command.

A resource can be started, stopped, enabled, or disabled.

Needed to check and change parameters.

Needed for Troubleshooting and Debugging.


LAB 11 crsctl
Hands On !


Lab 11.1 crsctl


Check the crsctl options by typing crsctl with no options.
Check the status of the Daemons:
# /u01/app/12.1.0.2/grid/bin/crsctl check css
# /u01/app/12.1.0.2/grid/bin/crsctl check evm
# /u01/app/12.1.0.2/grid/bin/crsctl check crs
# /u01/app/12.1.0.2/grid/bin/crsctl check ctss
# /u01/app/12.1.0.2/grid/bin/crsctl check cluster
# /u01/app/12.1.0.2/grid/bin/crsctl check cluster -all
Check the installed and active versions.
# /u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
# /u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion
List all the parameters of a resource.
# /u01/app/12.1.0.2/grid/bin/crsctl status res ora.orcl.db -f


Lab 11.2 crsctl


List the Cluster modules.
# /u01/app/12.1.0.2/grid/bin/crsctl lsmodules crs
# /u01/app/12.1.0.2/grid/bin/crsctl lsmodules css
# /u01/app/12.1.0.2/grid/bin/crsctl lsmodules evm
Take one of the modules reported by the previous command (lsmodules)
and put it in Debug mode.
# /u01/app/12.1.0.2/grid/bin/crsctl set log crs CRSCOMM:2
Stop the entire Node.
# /u01/app/12.1.0.2/grid/bin/crsctl stop cluster
Stop the other Node.
# /u01/app/12.1.0.2/grid/bin/crsctl stop cluster -n nerv02
Start the entire Cluster.
# /u01/app/12.1.0.2/grid/bin/crsctl start cluster -all


Voting Disks


Voting Disk
It is the hub of the Nodes' ping (heartbeat).

It can have N mirrors.

It can be changed from any Node.

Voting Disk backups are manual.

All Voting Disk operations must be run as root.

A backup must be taken after adding or removing Nodes (<11gR2).

Based on the information in it, the Clusterware decides which Nodes are part of the
Cluster (Election / Eviction / Split Brain).
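For example, to see where the Voting Disks currently live and whether they are ONLINE (as root, with the Grid home used in these labs):
# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk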


LAB 12 Voting Disk


Hands On !


Lab 12.1 Voting Disk


On machine nerv09, create 3 partitions of 1 GB (for the VD) and 3 of 2 GB (for the
OCR).
On machine nerv09, reconfigure the iSCSI server file /etc/tgt/targets.conf with
the 6 new partitions.
<target iqn.2010-10.com.nervinformatica:storage.asm01-08>
backing-store /dev/sda33
initiator-address 192.168.0.101
initiator-address 192.168.0.102
</target>
...
# service tgtd reload


Lab 12.2 Voting Disk


On machines nerv01 and nerv02, check the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201 -l
On machines nerv01 and nerv02, add the new disks to the file
/etc/iscsi/initiatorname.iscsi.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-08
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-09
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-10
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-11
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-12
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-13


Lab 12.3 Voting Disk


On machines nerv01 and nerv02, check that the disks are visible locally.
# fdisk -l
On machine nerv01, partition the new disks.
# fdisk /dev/sdi
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
...


Lab 12.4 Voting Disk


On machine nerv02, run detection of the new disks.
# partprobe /dev/sdi
# partprobe /dev/sdj
# partprobe /dev/sdk
# partprobe /dev/sdl
# partprobe /dev/sdm
# partprobe /dev/sdn
On machine nerv01, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK08 /dev/sdi1
# /etc/init.d/oracleasm createdisk DISK09 /dev/sdj1
# /etc/init.d/oracleasm createdisk DISK10 /dev/sdk1
# /etc/init.d/oracleasm createdisk DISK11 /dev/sdl1
# /etc/init.d/oracleasm createdisk DISK12 /dev/sdm1
# /etc/init.d/oracleasm createdisk DISK13 /dev/sdn1
On machine nerv02, run detection of the newly created disks.
# /etc/init.d/oracleasm scandisks


Lab 12.5 Voting Disk


On machines nerv01 and nerv02, check that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK08
# /etc/init.d/oracleasm querydisk -v -p DISK09
# /etc/init.d/oracleasm querydisk -v -p DISK10
# /etc/init.d/oracleasm querydisk -v -p DISK11
# /etc/init.d/oracleasm querydisk -v -p DISK12
# /etc/init.d/oracleasm querydisk -v -p DISK13
On machines nerv01 and nerv02, check that the disks are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle asmadmin 8, 17 Jan 2 13:01 DISK08
brw-rw----. 1 oracle asmadmin 8, 33 Jan 2 13:01 DISK09
brw-rw----. 1 oracle asmadmin 8, 49 Jan 2 13:01 DISK10
brw-rw----. 1 oracle asmadmin 8, 65 Jan 2 13:01 DISK11
brw-rw----. 1 oracle asmadmin 8, 81 Jan 2 13:01 DISK12
brw-rw----. 1 oracle asmadmin 8, 97 Jan 2 13:01 DISK13


Lab 12.6 Voting Disk


On machine nerv01, create the new DISK GROUPS.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP VD NORMAL REDUNDANCY DISK 'ORCL:DISK08' ,
'ORCL:DISK09' , 'ORCL:DISK10';
SQL> CREATE DISKGROUP OCR NORMAL REDUNDANCY DISK 'ORCL:DISK11' ,
'ORCL:DISK12' , 'ORCL:DISK13';
SQL> ALTER DISKGROUP OCR SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP OCR SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
$ srvctl start diskgroup -g OCR -n nerv02
$ srvctl enable diskgroup -g OCR -n nerv02
$ srvctl start diskgroup -g VD -n nerv02
$ srvctl enable diskgroup -g VD -n nerv02


Lab 12.7 Voting Disk


Check the status of the VOTING DISK.
# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk
Add a MIRROR to the VOTING DISK.
# /u01/app/12.1.0.2/grid/bin/crsctl add css votedisk +FRA
What happened?
# /u01/app/12.1.0.2/grid/bin/crsctl replace votedisk +FRA
What happened?
Change the location of the VOTING DISK.
# /u01/app/12.1.0.2/grid/bin/crsctl replace votedisk +VD
# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk


Lab 12.8 Voting Disk


On machine nerv09, simulate a failure of the VOTING DISK disks.
# dd if=/dev/zero of=/dev/sda33 bs=512 count=1000000
# dd if=/dev/zero of=/dev/sda34 bs=512 count=1000000
# dd if=/dev/zero of=/dev/sda35 bs=512 count=1000000
What happened?
On machines nerv01 and nerv02, check the state of the Cluster.
# tail -f /u01/app/oracle/diag/crs/nerv01/crs/trace/alert.log
# /u01/app/12.1.0.2/grid/bin/crsctl status res -t
# /u01/app/12.1.0.2/grid/bin/crsctl check cluster -all
On machines nerv01 and nerv02, disable automatic CRS startup, and reboot.
# /u01/app/12.1.0.2/grid/bin/crsctl disable crs
# reboot
On machine nerv01, start CRS in exclusive mode, and change the VOTING DISK.
# /u01/app/12.1.0.2/grid/bin/crsctl start crs -excl
# /u01/app/12.1.0.2/grid/bin/crsctl replace votedisk +CONFIG
On machines nerv01 and nerv02, enable automatic CRS startup, and reboot.
# /u01/app/12.1.0.2/grid/bin/crsctl enable crs
# reboot


Lab 12.9 Voting Disk


On machine nerv01, partition the VOTING DISK disks again.
# fdisk /dev/sd?
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
...
On machine nerv02, run detection of the new disks again.
# partprobe /dev/sd?
# partprobe /dev/sd?
# partprobe /dev/sd?
On machine nerv01, create the VOTING DISK disks again.
# /etc/init.d/oracleasm createdisk DISK08 /dev/sd?1
# /etc/init.d/oracleasm createdisk DISK09 /dev/sd?1
# /etc/init.d/oracleasm createdisk DISK10 /dev/sd?1
On machine nerv02, run detection of the newly created disks.
# /etc/init.d/oracleasm scandisks


Lab 12.10 Voting Disk


On machine nerv01, recreate the VOTING DISK DISK GROUP.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP VD NORMAL REDUNDANCY DISK 'ORCL:DISK08' ,
'ORCL:DISK09' , 'ORCL:DISK10';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
$ srvctl start diskgroup -g VD -n nerv02
On machine nerv01, change the location of the VOTING DISK.
# /u01/app/12.1.0.2/grid/bin/crsctl replace votedisk +VD
# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk


OCR


OCR Oracle Cluster Registry


It is the central repository of RAC information.

It must be on Storage shared by all Nodes.

It can have up to 4 mirrors.

OCR tools: ocrconfig, ocrcheck, ocrdump.

The OCR tools must be run as root.

It can be changed from any Node.

OCR backups are taken automatically.

Backups kept: 1 weekly, 1 daily, and 1 every 4 hours.

Physical and logical backups can both be taken.
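For example, to inspect the OCR and its automatic backups (as root, with the Grid home used in these labs):
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -showbackup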


LAB 13 OCR
Hands On !


Lab 13.1 OCR


Run ocrdump and examine the contents of the dump (OCRDUMPFILE).
# /u01/app/12.1.0.2/grid/bin/ocrdump
# file OCRDUMPFILE
# grep orcl OCRDUMPFILE
Run ocrcheck and check the result.
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# ls -lh /u01/app/oracle/diag/crs/nerv01/crs/trace/ocrcheck*
Check the OCR with the Cluster Verification Utility.
# /u01/app/12.1.0.2/grid/bin/cluvfy comp ocr -n nerv01,nerv02
Why doesn't it work?


Lab 13.2 OCR


On machine nerv01, change the OCR locations.
# /u01/app/12.1.0.2/grid/bin/ocrconfig -add +CONFIG
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -add +DATA
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -add +FRA
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -delete +DATA
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -delete +FRA
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -delete +CONFIG
Why doesn't it work?
# /u01/app/12.1.0.2/grid/bin/ocrconfig -add +OCR
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -delete +CONFIG
# /u01/app/12.1.0.2/grid/bin/ocrcheck


Lab 13.3 OCR


On machine nerv01, check the existing physical backups of the OCR.
# /u01/app/12.1.0.2/grid/bin/ocrconfig -showbackup
# /u01/app/12.1.0.2/grid/bin/ocrconfig -manualbackup
# /u01/app/12.1.0.2/grid/bin/ocrconfig -showbackup
On machine nerv01, change the location of the physical OCR backups.
# /u01/app/12.1.0.2/grid/bin/ocrconfig -backuploc +FRA
# /u01/app/12.1.0.2/grid/bin/ocrconfig -manualbackup
# /u01/app/12.1.0.2/grid/bin/ocrconfig -showbackup
On machine nerv01, take a logical backup of the OCR.
# /u01/app/12.1.0.2/grid/bin/ocrconfig -export /home/oracle/OCR.bkp
# file /home/oracle/OCR.bkp


Lab 13.4 OCR


On machine nerv09, simulate a failure of the OCR disks.
# dd if=/dev/zero of=/dev/sda45 bs=512 count=1000000
# dd if=/dev/zero of=/dev/sda46 bs=512 count=1000000
# dd if=/dev/zero of=/dev/sda47 bs=512 count=1000000
What happened?
On machines nerv01 and nerv02, check the state of the Cluster.
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# tail -f /u01/app/oracle/diag/crs/nerv01/crs/trace/alert.log
# /u01/app/12.1.0.2/grid/bin/crsctl status res -t
# /u01/app/12.1.0.2/grid/bin/crsctl check cluster -all
# /u01/app/12.1.0.2/grid/bin/ocrconfig -manualbackup
On machines nerv01 and nerv02, disable automatic CRS startup, and reboot.
# /u01/app/12.1.0.2/grid/bin/crsctl disable crs
# reboot
On machine nerv01, start CRS in exclusive mode.
# /u01/app/12.1.0.2/grid/bin/crsctl start crs -excl -nocrs


Lab 13.5 OCR


On machine nerv01, partition the OCR disks again.
# fdisk /dev/sd?
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
...
On machine nerv02, run detection of the new disks again.
# partprobe /dev/sd?
# partprobe /dev/sd?
# partprobe /dev/sd?
On machine nerv01, create the OCR disks again.
# /etc/init.d/oracleasm createdisk DISK11 /dev/sd?1
# /etc/init.d/oracleasm createdisk DISK12 /dev/sd?1
# /etc/init.d/oracleasm createdisk DISK13 /dev/sd?1
On machine nerv02, run detection of the newly created disks.
# /etc/init.d/oracleasm scandisks


Lab 13.6 OCR


On machine nerv01, recreate the OCR DISK GROUP.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP OCR NORMAL REDUNDANCY DISK 'ORCL:DISK11' ,
'ORCL:DISK12' , 'ORCL:DISK13';
SQL> ALTER DISKGROUP OCR SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';
SQL> ALTER DISKGROUP OCR SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
On machine nerv01, restore the OCR.
# /u01/app/12.1.0.2/grid/bin/ocrconfig -restore ...
On machine nerv01, re-enable the CRS and reboot.
# /u01/app/12.1.0.2/grid/bin/crsctl enable crs
# reboot
On machine nerv02, enable the CRS and reboot.
# /u01/app/12.1.0.2/grid/bin/crsctl enable crs
# reboot


oifcfg


oifcfg
From any Node, it controls all of them.

Should be used as the root user.

Tool for managing the Public, Interconnect, and VIP addresses.

Required to change the Nodes' network configuration.

Hostnames cannot be changed (only the VIPs can).
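For example, to list the interfaces seen on the node and the networks registered in the OCR (as root):
# /u01/app/12.1.0.2/grid/bin/oifcfg iflist -p -n
# /u01/app/12.1.0.2/grid/bin/oifcfg getif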


LAB 14 oifcfg
Hands On !


Lab 14.1 oifcfg


Take a physical backup of the OCR.
Check the current Interfaces on both Nodes. Save the output.
# /u01/app/12.1.0.2/grid/bin/oifcfg getif
On only one Node, add the new Interface to the Interconnect.
# /u01/app/12.1.0.2/grid/bin/oifcfg setif -global eth1/192.168.3.0:cluster_interconnect
Stop the Cluster on both Nodes.
# /u01/app/12.1.0.2/grid/bin/crsctl stop cluster -all
Log in to the nerv01 graphical environment as root, and change the Interconnect IP.
Log in to the nerv02 graphical environment as root, and change the Interconnect IP.
On nerv01, change /etc/hosts to the new IPs.
On nerv02, change /etc/hosts to the new IPs.
Restart the network on both Nodes.
# service network restart

Lab 14.2 oifcfg


Start the Cluster on both Nodes.
# /u01/app/12.1.0.2/grid/bin/crsctl start cluster -all
On only one Node, remove the old Interface after the CRS has started.
# /u01/app/12.1.0.2/grid/bin/oifcfg getif
# /u01/app/12.1.0.2/grid/bin/oifcfg delif -global eth1/192.168.1.0
# /u01/app/12.1.0.2/grid/bin/oifcfg getif
Take a physical backup of the OCR.


Rolling Patch


Rolling Patch
Allows Patches to be applied with no downtime.

One Instance is stopped, the Patch is applied, the Instance is started, and then you
move on to the next Instance.

The Patch must support Rolling Upgrade.

It cannot be used with shared HOMEs.

It cannot be used for Patchsets.
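Whether a patch supports rolling application can be checked with OPatch before stopping any Instance; a sketch, assuming the patch directory from Lab 15 and an OPatch version that supports the -is_rolling_patch query:
$ $ORACLE_HOME/OPatch/opatch query -is_rolling_patch /home/oracle/19303936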


LAB 15 Rolling Patch


Hands On !


Lab 15.1 Rolling Patch


Update OPatch.
$ unzip -q p6880880_121010_Linux-x86-64.zip
$ mv $ORACLE_HOME/OPatch/ $ORACLE_HOME/OPatch.BACKUP
$ mv /home/oracle/OPatch $ORACLE_HOME
Stop Instance ORCL1 on machine nerv01.
/u01/app/12.1.0.2/grid/bin/srvctl stop instance -d ORCL -n nerv01
Apply the Patch from machine nerv01.
$ unzip -q p19303936_121020_Linux-x86-64.zip
$ cd /home/oracle/19303936/
$ $ORACLE_HOME/OPatch/opatch apply
When the Patch application prompts you, in another terminal, start the instance
on machine nerv01 and stop the instance on machine nerv02. Then continue
applying the Patch.
/u01/app/12.1.0.2/grid/bin/srvctl start instance -d ORCL -n nerv01
/u01/app/12.1.0.2/grid/bin/srvctl stop instance -d ORCL -n nerv02


Lab 15.2 Rolling Patch


After applying it, start the instance on machine nerv02.
/u01/app/12.1.0.2/grid/bin/srvctl start instance -d ORCL -n nerv02
On machines nerv01 and nerv02, run the Patch's SQL changes step, as described
in the README.
cd $ORACLE_HOME/OPatch
./datapatch -verbose


Single Instance x RAC


Full Table Scans (Optimizer Statistics, System Statistics)

Bind Variables / Cursor Sharing

Sequences / Artificial Sequences (see the sketch after this list)

Dictionary

Reverse Key Indexes

Job x Scheduler

V$SESSION

UTL_FILE

Directories / External Tables / Data Pump

Partitioning
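On RAC, sequence contention is usually reduced with a large CACHE and NOORDER, at the cost of gaps and out-of-order values across Instances; a minimal sketch (the sequence name is just an example), related to Lab 16:
SQL> CREATE SEQUENCE nerv_seq CACHE 1000 NOORDER;
SQL> -- RAC-unfriendly form, for comparison: CREATE SEQUENCE nerv_seq2 NOCACHE ORDER;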


LAB 16 Sequences
Hands On !


Lab 16.1 Sequences


Create an entry in tnsnames.ora for the RAC neighboring yours.
Connect to the neighboring RAC as SCOTT/TIGER.
Review and run the script create_sequence.sql.
Review and run the script insert_sequence_lenta.sql.
Review and run the script insert_sequence_rapida.sql.
Repeat the insert scripts a few times and see whether the results stay similar.
Is it possible to improve this time?


Load Balance (Client)


Load Balance (Client)


Client tnsnames.ora
ORCL =
 (DESCRIPTION=
  (LOAD_BALANCE=ON)                                       # random choice
  (FAILOVER=ON)                                           # tries the 1st, then the 2nd
  (ADDRESS=(PROTOCOL=TCP)(HOST=nerv01-vip)(PORT=1521))    # VIP
  (ADDRESS=(PROTOCOL=TCP)(HOST=nerv02-vip)(PORT=1521))    # VIP
  (CONNECT_DATA=
   (SERVICE_NAME=ORCL)
   (FAILOVER_MODE=
    (TYPE=SELECT)                                         # SESSION or SELECT
    (METHOD=BASIC)                                        # BASIC or PRECONNECT
    (RETRIES=10)                                          # 10 connection attempts
    (DELAY=1)                                             # 1 second between attempts
   )
  )
 )


Load Balance (Server)


Load Balance (Server)


Client tnsnames.ora:
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac01-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL)
))
LOCAL_LISTENER parameter on the Nodes:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)
(HOST=nerv01-vip)
(PORT=1521))))
REMOTE_LISTENER parameter on the Nodes:
rac01-scan:1521


Load Balance (Server)


Services Goal
GOAL_NONE
GOAL_SERVICE_TIME - good for OLTP
GOAL_THROUGHPUT   - good for Batch / OLAP / DBA

Services Connection Load Balance Goal

CLB_GOAL_SHORT - good for OLTP
CLB_GOAL_LONG  - good for Batch / OLAP / DBA, or persistent OLTP
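These goals are set per Service; a sketch with srvctl against the ORCL database (the OLTP Service name matches the one created in Lab 17):
$ srvctl modify service -db ORCL -service OLTP -rlbgoal SERVICE_TIME -clbgoal SHORT
$ srvctl config service -db ORCL -service OLTP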


Load Balance (Server) - Scheduler


Create a Job Class in the Scheduler for the DBA Service:
BEGIN
  DBMS_SCHEDULER.create_job_class (
    job_class_name => 'DBA_JOB_CLASS',
    service        => 'DBA_SERVICE');
END;
/
Create a Job in the Scheduler that uses the DBA Service:
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'SYS.DBA_JOB_TEST',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_DATABASE_STATS; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;',
    job_class       => 'SYS.DBA_JOB_CLASS',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job linked to the DBA_JOB_CLASS.');
END;
/


Load Balance (Server)


srvctl add service
srvctl remove service
srvctl modify service
srvctl relocate service
srvctl status service
srvctl start service
srvctl stop service
srvctl enable service
srvctl disable service


LAB 17 Load Balance (Server)


Hands On !


Lab 17.1 Load Balance (Server)


Look at the configured Services.
$ srvctl config database -d ORCL
Create Services for OLTP, BATCH, and DBA.
Use options specific to each type of Service (GOAL, CLB, etc.).
Example:
$ srvctl add service -db ORCL -service OLTP -preferred ORCL2 -available ORCL1
-tafpolicy PRECONNECT -policy AUTOMATIC -failovertype SELECT -failovermethod
BASIC -failoverdelay 1 -failoverretry 10 -clbgoal SHORT -rlbgoal SERVICE_TIME
-notification TRUE
How do you see the details of an already configured Service?
Change tnsnames.ora to use the new Services and test the Failover.


Adding and Removing Nodes


Adding and Removing Nodes


Sequence to add a Node (a verification sketch follows this list):

Install the Hardware;

Install and configure the Operating System;

Configure access to the Storage;

Configure passwordless ssh with the already existing Nodes;

Install Grid Infrastructure from an existing Node;

Install Oracle from an existing Node;

Add the Instances.
Sequence to remove a Node:

Remove the Instance;

Remove Oracle;

Remove the Clusterware.
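Before running addnode.sh, the new node can be verified from an existing node with cluvfy; a sketch, assuming the new node is nerv03 as in Lab 18:
$ $GRID_HOME/bin/cluvfy stage -pre nodeadd -n nerv03 -fixup -verbose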


LAB 18 Adding Nodes


Hands On !


Lab 18.1 Adding Nodes


Keep only 1 RAC active in the classroom.
On the remaining machines, redo Labs 1, 3, and 4.
On all machines, configure passwordless SSH for the oracle user.
Install Grid Infrastructure on the other machines, from an existing Node:
$ cd $GRID_HOME/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={nerv03}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={nerv03-vip}"
On the other machines, as the root user, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/12.1.0.2/grid/root.sh
Install Oracle Database on the other machines, from an existing Node:
$ cd $ORACLE_HOME/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={nerv03}"
On the other machines, as the root user, run the script below.
# /u01/app/oracle/product/12.1.0.2/db_1/root.sh


Lab 18.2 Adding Nodes


On machine nerv01, add the instance.
$ $GRID_HOME/bin/srvctl add instance -d ORCL -i ORCL3 -n nerv03
On machine nerv01, finish adding the node.
SQL> ALTER SYSTEM SET INSTANCE_NUMBER=3 SID='ORCL3' SCOPE=SPFILE;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> CREATE UNDO TABLESPACE UNDOTBS3;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS3 SID='ORCL3' SCOPE=SPFILE;
$ $GRID_HOME/bin/srvctl start instance -d ORCL -i ORCL3
On machine nerv01, check the RAC health check score.
$ cd /home/oracle
$ unzip orachk.zip
$ ./raccheck


Flex ASM


LAB 19 Flex ASM


Hands On !


Lab 19.1 Flex ASM


On machine nerv01, check the current Cluster configuration.
$ $GRID_HOME/bin/asmcmd showclustermode
$ $GRID_HOME/bin/srvctl status asm
$ $GRID_HOME/bin/srvctl config asm
On machine nerv01, convert the Cluster to Flex ASM.
$ $GRID_HOME/bin/asmca -silent -convertToFlexASM -asmNetworks eth1/192.168.3.0
-asmListenerPort 1522
# /u01/app/oracle/cfgtoollogs/asmca/scripts/converttoFlexASM.sh
On machine nerv01, check the current Cluster configuration.
$ $GRID_HOME/bin/srvctl status asm
$ $GRID_HOME/bin/srvctl config asm
$ $GRID_HOME/bin/srvctl modify asm -count 2
$ $GRID_HOME/bin/srvctl status asm
$ $GRID_HOME/bin/srvctl config asm


Flex Cluster


LAB 20 Flex Cluster


Hands On !


Lab 20.1 Flex Cluster


On machine nerv01, check the current Cluster configuration.
# /u01/app/12.1.0.2/grid/bin/crsctl get cluster mode status
# /u01/app/12.1.0.2/grid/bin/srvctl config gns
On machine nerv01, change the Cluster configuration.
# /u01/app/12.1.0.2/grid/bin/srvctl add gns -vip 192.168.0.191
# /u01/app/12.1.0.2/grid/bin/srvctl start gns
# /u01/app/12.1.0.2/grid/bin/crsctl set cluster mode flex
On machine nerv01, change the Node roles.
# /u01/app/12.1.0.2/grid/bin/crsctl get node role config -all
# /u01/app/12.1.0.2/grid/bin/crsctl set node role leaf -node nerv07
# /u01/app/12.1.0.2/grid/bin/crsctl set node role leaf -node nerv08
# /u01/app/12.1.0.2/grid/bin/crsctl get node role config -all


Best Practices


Best Practices
- A two-node RAC has several limitations.
- Use certified Hardware for your implementation.
- Eliminate the single points of failure: NICs, Switch, Storage, etc.
- The Interconnect Switch must be physical and dedicated.
- Use ASM. It is Oracle's direction.
- Use GNS. It is Oracle's direction.
- Use /etc/hosts in addition to DNS.
- In ASM, use separate DGs for DATA, FRA, OCR, and VD.
- Centralize the OCR backups.
- Use DATA and FRA with redundancy in the Storage or in ASM.
- Use OCR and VD with redundancy in the Storage and in ASM.
- Use local HOMEs for GRID_HOME and ORACLE_HOME.
- Logs (all of them) must be centralized. A log that does not exist is worthless.
- Also watch the Operating System logs.
- Use Jumbo Frames on the Interconnect.
- Use Huge Pages, and do not use AMM (see the sketch after this list).
- RAC turns good applications into great ones, and bad ones into terrible ones.
- Forget SIDs. Learn Services.
- Partition your application into Services. Without Services, RAC is nothing.
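A quick way to check whether Huge Pages are in place (a sketch; the vm.nr_hugepages value is only an example and must be sized from the SGAs):
# grep Huge /proc/meminfo
# sysctl vm.nr_hugepages
# echo "vm.nr_hugepages = 2048" >> /etc/sysctl.conf
# sysctl -p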

What now?


Forum


Students


Blog


YouTube


Facebook / Twitter


Thank you!

