Under the Hood of the Next Generation of Oracle Real Application Clusters
Better availability (due to reduced reconfiguration times)
Better scalability (for singleton services)
Efficient management for large scale deployments
Oracle RAC scalability is independent of the number of nodes.
Oracle RAC scales most of the enterprise solutions used today.
Oracle RAC does not require application changes (unlike sharding).
http://www.slideshare.net/MarkusMichalewicz/oracle-rac-customer-proven-scalalbility
http://www.slideshare.net/MarkusMichalewicz/oracle-rac-internals-the-cache-fusion-edition
NEW: http://www.slideshare.net/MarkusMichalewicz/paper-oracle-rac-internals-the-cache-fusion-edition
http://www.slideshare.net/MarkusMichalewicz/application-development-best-practices-for-oracle-real-application-clusters-rac
http://www.slideshare.net/MarkusMichalewicz/oracle-multitenant-meets-oracle-rac-ioug-2014-version
http://www.slideshare.net/MarkusMichalewicz/oracle-database-inmemory-meets-oracle-rac
http://www.slideshare.net/MarkusMichalewicz
Copyright 2014, Oracle and/or its affiliates. All rights reserved. | 8
Oracle RAC 12c Release 2: Scaling in Two Dimensions
gridSetup.sh can be used to add nodes!
After the installation, any Oracle RAC 12.2 Cluster is an all-HUB Flex Cluster, using Flex ASM with count=3 (count = all after upgrade).
A GNS (just an IP, no domain delegation) is required for Leaf nodes to find HUBs in a Flex Cluster. If Leaf nodes are added later, a GNS must be added first.
This setup compares to the pre-12.2 standard cluster.
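The cluster mode, node roles, and Flex ASM cardinality mentioned above can be inspected and adjusted from the command line. A minimal sketch, assuming a running 12.2 Grid Infrastructure; verify the exact options against the crsctl/srvctl references for your release:

```shell
# Check whether the cluster runs as a Flex Cluster and list node roles (hub/leaf)
crsctl get cluster mode status
crsctl get node role config -all

# Inspect the Flex ASM configuration, then change its cardinality.
# "-count 3" reflects the default mentioned above; "-count ALL" matches
# the post-upgrade behavior (an ASM instance on every node).
srvctl status asm -detail
srvctl modify asm -count 3
srvctl modify asm -count ALL
```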
* Read-only WL on Leaf instances will scale.
Use Case 1: Massive Parallel Query RAC
Overlay your Hadoop Cluster (HDFS) with an Oracle Flex Cluster to access data in Hadoop via SQL and perform cross-data (ad-hoc) analysis using standard interfaces.

Use Case 2: RAC Reader Nodes
Use Read-Only workload (WL) on read-mostly Leaf node instances for ad-hoc data analysis scaled across hundreds of nodes with no delay in accessing updated data, without any impact on OLTP performance* and with better HA**.
Connect Leaf nodes to storage: Leaf nodes for applications do not require direct storage access; running database instances on Leaf nodes does.

Install Oracle Database Home on all nodes as needed: If you ever want to run a database instance on a Leaf node, it needs a database home as any other node.

Extend public network to Leaf(s): For the RAC Reader Nodes use case only, enable a public network connection on Mars by extending the network and listener resources to the Leaf.
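A hedged sketch of what "extending the listener resources to the Leaf" can look like with srvctl; the listener name, port, and node name below are illustrative, not from the slides:

```shell
# Register an additional listener for the Leaf node (example name and port)
srvctl add listener -listener LEAF_LISTENER -endpoints "TCP:1522"

# Start it on the Leaf node (example node name "leaf01")
srvctl start listener -listener LEAF_LISTENER -node leaf01
```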
Create a Policy-Managed RAC DB

RAC Reader Nodes as well as Massive Parallel Query RAC require a Policy-Managed database; Admin-managed DBs cannot be extended to Leafs.

For Massive Parallel Query RAC, create new server pools along with the database. Make sure to create a Parallel Query Server Pool.

For RAC Reader Nodes, create the database on HUB nodes; the addition of database instances on Leaf nodes is dynamic and managed via command line.

(Re-)starting the OLTPWL service finalizes the DWHWL service setup.
Summary
For RAC Reader Nodes, add a Reader Farm (RF) pool to the system using the add service command (dynamic).
Note that if a Leaf node is used for Massive Parallel Query RAC, it should not allow for direct connections to the Leaf node instance.
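The dynamic "add service" step above can be sketched with srvctl as follows. The database, server pool, and service names ("orcl", "readerfarm", "rf_svc") are hypothetical placeholders:

```shell
# Create a server pool to act as the Reader Farm (hypothetical sizing)
srvctl add srvpool -serverpool readerfarm -min 0 -max 4

# Add a service for the read-only workload to that pool, then start it
srvctl add service -db orcl -service rf_svc -serverpool readerfarm
srvctl start service -db orcl -service rf_svc
```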
Availability due to Node Weighting
Improved availability for Standalone Clusters here: all-HUB, Flex Cluster-based availability
Autonomous Health Framework continuously working for you
4x faster
Pluggable Database and Service Isolation improves availability by ensuring that instance failures of instances only hosting singleton PDBs will not impact other instances of the same RAC-based CDB.
Near Zero Downtime Reconfig. via Buddy Instances, which track modified data blocks on other nodes to quickly identify blocks requiring recovery, which allows for rapid processing of new transactions in case recovery is needed.
Node Weighting considers the workload hosted in the cluster during fencing.
The idea is to let the majority of work survive, if everything else is equal.
Example: In a 2-node cluster, the node hosting the majority of services (at fencing time) is meant to survive.
A three-node cluster will benefit from Node Weighting if three equally sized sub-clusters are built as a result of the failure, since two differently sized sub-clusters are not equal.

Secondary failure consideration: a fallback scheme can influence which node survives; it is applied if considerations do not lead to an actionable outcome. Secondary failure consideration will be enhanced successively.
srvctl modify database -help | grep critical
  -css_critical {YES | NO}    Define whether the database or service is CSS critical
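Building on the -css_critical option shown above, marking a database or service as CSS critical is a single srvctl call. A sketch with illustrative names ("orcl", "payroll"); the help text above indicates the option applies to both databases and services:

```shell
# Weight the fencing decision in favor of the node running this database
srvctl modify database -db orcl -css_critical YES

# Or weight it in favor of a specific service
srvctl modify service -db orcl -service payroll -css_critical YES
```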
$ORACLE_HOME/gridSetup.sh
Shared ASM

Configure an Oracle Domain Services Cluster

1. Configure an Oracle Domain Services Cluster (DSC) as part of the gridSetup-based install. A DSC install follows the Standalone Cluster install.
2. Create a credential file for each Member Cluster you want to deploy and make it accessible to the server on which you will run the Member Cluster install.
3. Run gridSetup on the server on which you want to run the Member Cluster install and provide access to the credential file when requested. Then follow the instructions on the screen.
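The credential file in step 2 is created on the DSC with crsctl. A minimal sketch, assuming a database-type Member Cluster named "mc1" and an example file path; check the 12.2 crsctl reference for the full option set:

```shell
# On the Domain Services Cluster: create the Member Cluster credential file
crsctl create member_cluster_configuration mc1 -file /tmp/mc1.xml -member_type database

# Copy /tmp/mc1.xml to the server where the Member Cluster install will run,
# then provide it to gridSetup.sh when requested (step 3 above).
```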
Copyright 2015, Oracle and/or its affiliates. All rights reserved. | 37
Proven Features Even More Beneficial on the DSC