
https://evaluation-education.oracle.com/pls/apex/f?p=229:11:130436478992824:FETCH:NO::P11_CLASS_ID:5084441&cs=1KZoprg3Gt7ZJrLSdXGtorWy3EoQ

https://en.wikipedia.org/wiki/Spoke%E2%80%93hub_distribution_paradigm
https://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c
https://docs.oracle.com/database/121/CWADD/bigcluster.htm#CWADD92647
https://access.redhat.com/sites/default/files/attachments/deploying_oracle_rac_12c_rhel7_v1.1_0.pdf
https://access.redhat.com/sites/default/files/attachments/deploying-oracle-12c-on-rhel6_1.2_1.pdf
https://www.doag.org/formes/pubfiles/6459598/docs/Konferenz/2014/vortraege/Oracle%20Infrastruktur/2014-INF-Robert_Bialek-Oracle_Grid_Infrastructure_12c_in_der_Praxis-Manuskript.pdf
https://asanga-pradeep.blogspot.co.at/2015/09/changing-hub-node-to-leaf-node-and-vice.html

GNS configuration:
https://martincarstenbach.wordpress.com/2011/11/17/simplified-gns-setup-for-rac-11-2-0-2-and-newer/
http://72.32.201.108/rac/Oracle_RAC_with_GNS.html
Joel Goodman - Reading/London - https://dbatrain.wordpress.com/
Harald van Breederode - Utrecht - https://prutser.wordpress.com/
https://prutser.files.wordpress.com/2016/06/demystifying.pdf
https://uhesse.com/
https://mikedietrichde.com/
https://www.ogh.nl/downloads/OGH20091103_RENE_KUNDERSMA_ORACLE.pdf

MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)


Oracle Recommended Patches -- Oracle Database (Doc ID 756671.1)
Configuring DBFS on Oracle Exadata Database Machine (Doc ID 1054431.1)

FLEX ASM :
crsctl status resource ora.asm -f | grep CARDINALITY
srvctl modify asm -count 2
srvctl status asm -detail
srvctl stop asm -node host03 -f
srvctl status asm -detail
srvctl start asm -node host03
srvctl status asm -detail
srvctl relocate asm -currentnode host01 -targetnode host03   # observe the expected behaviour
srvctl modify asm -count 1                                   # observe the expected behaviour
srvctl relocate asm -currentnode host02 -targetnode host03
srvctl start asm -proxy -node host03
asmcmd showclustermode
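A note on the CARDINALITY check above: plain `grep CARDINALITY` also matches CARDINALITY_ID, so an exact-match filter is safer. A minimal sketch, with illustrative sample output rather than output captured from a real cluster:

```shell
# Extract the ASM resource cardinality from saved
# "crsctl status resource ora.asm -f" output.
# The sample output below is illustrative only.
crsctl_output='NAME=ora.asm
TYPE=ora.asm.type
CARDINALITY=3
CARDINALITY_ID=0'

# awk with an exact key match avoids also picking up CARDINALITY_ID,
# which a plain "grep CARDINALITY" would match as well.
cardinality=$(printf '%s\n' "$crsctl_output" | awk -F= '$1 == "CARDINALITY" {print $2}')
echo "ASM cardinality: $cardinality"
```

On a real cluster, pipe the crsctl output straight into the awk filter instead of the sample variable.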

/u01/app/grid/cfgtoollogs/asmca/scripts/acfs_script.sh

CRS-1665: maximum number of cluster Hub nodes reached; the CSS daemon is terminating
CRS-2883: Resource 'ora.cssd' failed during Clusterware stack start.

host02:

crsctl get node role config


crsctl set node role leaf
crsctl stop crs
crsctl start crs -wait

check for leaf_listener

on the remaining HUB nodes (host01, host03), as grid:


/u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={host01,host03}" -silent -local CRS=TRUE

on the converted node (host02), as grid:


/u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={host02}" -silent -local CRS=TRUE

check :
crsctl stop cluster -all
crsctl start cluster -all
crsctl stat res -t    # look for leaf_listener

The reverse direction (leaf back to hub):
crsctl get node role config
crsctl set node role hub
crsctl stop crs
crsctl start crs -wait
on ALL HUB nodes (host01, host02, host03), as grid:
/u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={host01,host02,host03}" -silent -local CRS=TRUE

then set:
crsctl set node role auto -node host03
crsctl get node role config -node host03
crsctl get cluster hubsize
crsctl set cluster hubsize 2
restart the cluster stack on host03

https://en.wikipedia.org/wiki/Deadline_scheduler
nslookup host01-vip.cluster01.example.com 192.0.2.155
dig @192.0.2.155 host01-vip.cluster01.example.com
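The same check can be looped over all node VIPs. This sketch only prints the dig commands (a dry run), so it runs without a cluster; the hostnames, domain and GNS address follow the examples above:

```shell
# Print one dig command per node VIP, resolving through the GNS address.
gns_ip=192.0.2.155                 # GNS address from the example above
domain=cluster01.example.com
for vip in host01-vip host02-vip host03-vip; do
  cmd="dig @$gns_ip +short $vip.$domain"
  echo "$cmd"                      # dry run; eval "$cmd" on a real client
done
```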

! Direct NFS (dNFS) on 12c homes must be explicitly activated.
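On 12c, the Direct NFS client is toggled by relinking the ODM library in the RDBMS home. A hedged sketch: the ORACLE_HOME path is an assumption, and the commands are only printed here so the snippet runs anywhere; drop the echo to execute them in a real home.

```shell
# Enable Direct NFS by relinking the ODM library (dnfs_off reverts it).
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1   # assumed path
cmd="make -f ins_rdbms.mk dnfs_on"
echo "cd $ORACLE_HOME/rdbms/lib && $cmd"              # dry run
```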


Cluster Robustness Framework (CRF)

CLUVFY :
cluvfy comp peer -refnode host01 -n host03 -orainv oinstall -osdba asmdba -verbose
cluvfy stage -post hwos -n host03
cluvfy stage -post nodeadd -n host03 -verbose   # also shows the IPs and can replace the dig/nslookup checks

root@host01:
crsctl unpin css -n host03
grid@host03:
cd /u01/app/12.1.0/grid/oui/bin/
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES=host03" CRS=TRUE -silent -local
/u01/app/12.1.0/grid/deinstall/deinstall -local
grid@remaining node :
cd /u01/app/12.1.0/grid/oui/bin/
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={host01,host02}" CRS=TRUE -silent
root@remaining node :
crsctl delete node -n host03

Optional :
srvctl stop vip -i vip_name -f
srvctl remove vip -i vip_name -f

cluvfy stage -post nodedel -n host03 [-verbose]

===================================================================================
ASM - Multipath: Best Practices
Intent
This best-practices note assumes that you have already decided to use Oracle Automatic Storage Management (ASM) and describes how to configure the disk devices.
Implementation
Storage should be configured with multipathing to avoid a single point of failure (SPOF). The storage devices will most probably not be local physical
devices but virtual devices that are part of a Storage Area Network (SAN), each presented to your machine as a Logical Unit Number (LUN). For each LUN
you will have a corresponding entry in the /dev/mapper directory whose name represents the SCSI WWID, e.g. 360002ac0000000000000000200009982. To identify
those LUNs conveniently and to assign the proper ownership and access rights, two configuration files are involved: the multipath.conf file and a
so-called udev rules file.
multipath.conf is located in the /etc directory, while the udev rules files are located in the directory /etc/udev/rules.d.
multipath.conf
This file is primarily intended for configuring multipathing, but it also offers a means to map SCSI WWIDs to human-readable names (aliases). You can
also specify ownership, group and access mode here; however, these parameters are deprecated, will be removed in a future release, and do not appear to
work reliably. The format of the multipath.conf file:

# Configuration example
... # initial setup sections, e.g. defaults and devices configuration
multipaths {
    multipath {
        wwid  360002ac0000000000000003800009984
        alias asmdisk_a_01
        mode  0660                          # setting the file access mode here is deprecated and will not be available in future
        gid   <group id of OS user grid>    # setting the group in multipath.conf is deprecated and will not be available in future
        uid   <user id of OS user grid>     # setting the ownership in multipath.conf is deprecated and will not be available in future
    }
    multipath {
        wwid  360002ac0000000000000002300009984
        alias asmdisk_a_02
        mode  0660                          # cf. mode above
        gid   <group id of OS user grid>    # cf. gid above
        uid   <user id of OS user grid>     # cf. uid above
    }
    multipath {
        wwid  360002ac0000000000000005400009985
        alias asmdisk_b_01
        mode  0660                          # cf. mode above
        gid   <group id of OS user grid>    # cf. gid above
        uid   <user id of OS user grid>     # cf. uid above
    }
}

udev rules file


udev rules are configured using files in the directory /etc/udev/rules.d. The files are processed sequentially, hence their names start with a number,
e.g. /etc/udev/rules.d/55-asm.rules. The format of the udev rules file for ASM disks is:

# Configuration example: specify the disk name pattern, group assignment,
# owner and access mode here, rather than in the multipath.conf file.
# The disk path name is specified relative to the /dev directory.
KERNEL=="mapper/asm*", GROUP="dba", OWNER="oracle", MODE="0660"

Independent of which method you plan to use: it is important to assign the disks to oracle:dba and to set the access mode to 0660.
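After a reboot it is worth verifying that the aliases actually appeared with the intended ownership; on a real host, `multipath -ll` and `ls -l /dev/mapper/asmdisk*` do that. As a runnable sketch, here is a naming-convention check against a sample device list (the list is illustrative, not from a real host):

```shell
# Count devices matching the asmdisk_<array>_<nn> alias pattern used above.
# Sample list; on a real host substitute:  devices=$(ls /dev/mapper)
devices='asmdisk_a_01
asmdisk_a_02
asmdisk_b_01
control'

asm_count=$(printf '%s\n' "$devices" | grep -c '^asmdisk_[a-z]_[0-9][0-9]$')
echo "ASM multipath devices found: $asm_count"
```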
===================================================================================

Policy-managed clusters:


crsctl status server host01 -f -h    # -h shows some interesting attributes, e.g. co, st, en, nc

/u01/app/12.1.0/grid/OPatch/oplan/oplan generateApplySteps /stage/psu/25434018/25434003/
