GrepOra Team
All rights reserved GrepOra Team - http://grepora.wordpress.com
| GREP ORA
| sort quality | head 200
This is an independent site. The opinions published on this blog are personal and do not
represent the views of Oracle or any other institution, as stated in:
CC BY-NC 3.0
About the Blog
GrepOra is a blog between friends to learn and share about our daily experiences and
challenges with Oracle technologies.
The blog started in Jan/2015 as MatheusDBA, focused on Oracle Database stuff and
written only by Matheus. In November 2015, more authors specialized in other
Oracle-related technologies joined Matheus and the blog was renamed to GrepOra.com.
The common thing is that we all work with Oracle in different ways: Database,
Middleware, Integration and Application. Someday we realized we were always having
conversations, frequently about Oracle stuff. So we decided to run a "grep" on these
conversations to filter the ones related to Oracle and share them.
And this is the origin of the name "GrepOra.com" (or |GREP ORA).
GrepOra is also our way to say "thank you" to the community and give back part of all
the learning we got through blogs and communities.
Feel welcome to read the book and the blog, to follow, share and get in touch with us.
It'll be great to have you with us in every post!
To know more about each one of us, access the Members section
(https://grepora.com/members/) on the blog or take a look at the next pages.
To understand the posting schedule, access the Posting Schedule section on the blog
(https://grepora.com/agenda/).
Sincerely,
Matheus , Maiquel, Dieison, Rafael, Jackson and Cassiano.
GrepOra.com in 2016…
Hello!
Today’s post is to share with you some information about what 2016 represented for
GrepOra.com .
In 2016, the first official year of GrepOra.com, we had over 26,000 accesses from
more than 160 different countries. Indeed, almost every country in the world visited
GrepOra.com this year. And this is spectacular considering we discuss very specific
topics about Oracle Database and Applications.
The accesses are still growing every day, which shows us we can expect even bigger
numbers to celebrate in 2017. See below our monthly accesses graph of 2016.
Besides that, some accomplishments make us even prouder, like being recognized by
OTN LA (Oracle Technology Network – Latin America) as a technical reference blog
in the Database Management and Performance category.
Since this recognition in June, we have the OTN LA logo on our blog page. Also, since
August, we have the GUOB logo, since I participated in the last GUOB Tech Day as an
Official Blogger.
All this, however, was not achieved only by having the blog. Since the beginning we
organized the weekly posting schedule and the authors' pages. The consistency proves
itself in our monthly access growth. The organization and commitment to keep
posting relevant content is what led us to this point.
Be sure we are preparing lots of news and even more quality content on GrepOra.com
for next year.
Matheus.
GrepOra Team
As already mentioned, we are a group of friends who are crazy enough to share our
experiences with you and with the Oracle community, as a way of giving back what we
have consumed ourselves.
In the next pages you are going to see some of our backgrounds and a brief professional
description of each of us. So, for now, we are only going to share some photos of our
occasional meetings.
(Maiquel, Rafael, Jackson, Matheus, Dieison and Cassiano)
(Last GrepOra Meeting – by now)
Let us know what you think about the book and the blog. Reach out to us on social media
like LinkedIn and Twitter. Collaborate and engage with the Community!
Cheers!
About the Book
Hello!
Welcome to our book, our blog and our world, where you can have fun and
view/review/learn/laugh with some of our struggles and the personal notes we keep for
our future selves.
These posts are basically our notes, with some of our discoveries and tips to review in
the future. I believe everyone who works with this kind of technology has some
personal notes, right? So, ours are being published to share with you.
We believe in sharing and mutual growth, so feel free to reach out to share your notes
and tips, to fix anything you think is wrong or could be better explained, or anything else.
This is not only the GrepOra team's blog. This is our blog. Which includes you.
Ok then. But why are we publishing a book? Who is the target audience? How
should I read it? How is it structured? What should I expect?
Why:
This week we are completing 2 years since the blog was created (at that time, called
MatheusDBA). And we decided to review our best moments in these last years and
compile them for you. It's, above all, a good opportunity to refresh some posts that
are still current.
For whom:
We are compiling it as a best-moments review, to engage new readers with the best
past posts and to reach those readers who enjoy reading a book on their mobile reading
devices. Actually, we believe that writing material for this kind of media is the future (or
the present), so if you prefer to read PDF files on your Kindle, iPad, or similar, especially
if you prefer the offline mode to avoid being bothered by social media
notifications, instant messages and the like: this is for you.
How to read:
This is a book generated from the best posts on the blog. If you read the blog you know
that the posts are not continuous and mostly have no relation to each other. So, this is
a book to read some curiosities and tips, to learn and review some useful stuff and to
be aware of some day-to-day challenges and struggles of working with Oracle
technologies. This is not a book to be read in sequence, by chapters or anything like
that. Feel free to read whatever you want, whatever you feel is interesting to you,
and to enrich your own experience with Oracle techs… Simple as that.
The structure:
There are no boundaries for our posts and ideas. Of course we have specialties, but
everyone can write about everything. So there are no chapters with any strictly fixed
boundaries. However, to give it a little sense, we kind of organized the posts
following this (using our blog categories):
• ASM;
• Enterprise Manager;
• Cloud Computing;
• Heterogeneous Databases;
What to expect:
Basically: "To read some curiosities and tips, to learn and review some useful stuff
and to be aware of some day-to-day challenges and struggles of working with
Oracle technologies". But mostly: to have fun! This is a book written by Oracle geeks
for Oracle geeks.
ADRCI Retention Policy and Ad-Hoc Purge
Script for all Bases
As you know, since 11g we have an Automatic Diagnostic Repository (ADR). To better
manage it, we also have a command-line interface, called ADRCI.
ADR contains all diagnostic information for the database (logs, traces, incidents,
problems, etc).
ADR Structure
The objective of this post, however, isn't to show everything ADRCI can do, but to share
how to configure the retention policy and a quick script to clean logs from all homes on
the server.
The policies are set by default to 720 hours (30 days) for the short term and 8760
hours (one year) for the long term category. See:
720    8760    2013-08-10 15:42:04.686159 +00:00    2016-04-25 20:53:28.159552 +00:00
We can change this by using the ADRCI command 'set control'. Look at the example
below for changing the retention to 15 days for the Short Term policy attribute
(note it's defined in hours!):
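The command itself was an image in the original post; a minimal sketch of it (15 days = 360 hours):

adrci> set control (SHORTP_POLICY = 360)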
adrci purge
There are a lot of scripts on the net, but my personal script for ad-hoc/manual purges is along these lines:
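The script was shown as an image in the original post; below is a minimal sketch of the idea, assuming a 30-day cleanup (ADRCI purge ages are in minutes — adjust to your policy):

#!/bin/bash
# purge traces, incidents and core dumps older than 30 days (43200 minutes)
# in every ADR home registered on this server
for home in $(adrci exec="show homes" | grep -v "ADR Homes:"); do
  echo "Purging home: $home"
  adrci exec="set home $home; purge -age 43200 -type trace; purge -age 43200 -type incident; purge -age 43200 -type cdump"
done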
That’s it!
Have a nice day!
Matheus.
High CPU usage by LMS and Node
Evictions: Solved by Setting
“_high_priority_processes”
Another thing that may help you in environments with highly interdependent
applications:
Our env has a high rate of interconnect block exchange and, as a consequence, high
CPU usage by the Global Cache Services (GCS)/Lock Manager Server process (LMS).
This way, for each little latency on the interconnect interface, we were having a node
eviction, with all the impacts to the legacy application you can imagine (without GridLink
or any solution to make the relocation 'transparent', as is usual for legacy applications)
and, of course, the business impact.
Oracle obviously suggested that we reduce the block concurrency over the cluster
nodes by grouping the application by affinity. But it's just not applicable to our env…
When nothing seemed to help, the workaround came from here: Top 5 Database
and/or Instance Performance Issues in RAC Environment (Doc ID 1373500.1) .
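The actual change was shown as an image in the original post. As a hedged sketch (the exact value depends on your version — check the MOS note before setting any underscore parameter; a restart is required):

SQL> alter system set "_high_priority_processes"='LMS*' scope=spfile sid='*';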
No magic, but the problem stopped happening. After that, we're having some warnings
about clock synchronization over the cluster nodes in the CRS alerts. Like this:
CRS-2409:The clock on host proddb1 is not synchronous with the mean cluster time.
No action has been taken as the Cluster Time Synchronization Service is running in
observer mode.
I believe it happens because VKTM lost priority. But it's OK: the node evictions have
stopped!
Matheus.
Application Looping Until Lock a Row with
NOWAIT Clause
Yesterday I treated an interesting situation:
A BATCH process stayed on the "SQL*Net message from client" event, but last_call_et
was always 1 or 0. Seems OK, just some client contention to send the commands to
the DBMS, right? Nope.
It was caused by a loop in the application code "waiting" for a row lock but without
"DBMS wait events" (something like "select * from table for update nowait"). Take
a look at how it was identified below.
As you see, with no idea about what was happening, I started a trace. The trace was
stuck with this:
AHÁ!
Did you see the "err=54" there? Yes, you know this error: ORA-00054 (resource busy
and acquire with NOWAIT specified).
It's caused by a SELECT ... FOR UPDATE NOWAIT in the code.
But this select is in a loop, so the session doesn't go ahead until it gets the lock.
(Obviously it could be coded with some treatment/better logic for this loop and its errors,
buuuut…)
To find the blocker, we use the "obj#" and "value", also bolded in the trace.
As I know the application, I know that the field used in all "where clauses" is the
"RECNO" column. But if you don't, you need to discover it. With this information in
mind:
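The queries themselves were images in the original post. A hedged sketch of the idea (object id and RECNO value here are hypothetical): translate the obj# into a table, then grab the same row yourself with a blocking SELECT FOR UPDATE so the holder shows up in gv$session:

select owner, object_name from dba_objects where object_id = 123456;

select * from owner_from_above.table_from_above where recno = 987654 for update;

-- then, from another session:
select sid, serial#, blocking_session, blocking_instance
  from gv$session
 where event = 'enq: TX - row lock contention';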
AHÁ again!
The SID 11006. Let’s see who is there:
proddb2> @sid
Sid: 11006  Inst:

SQL_ID          SEQ#   EVENT                          STATUS  SID    SERIAL#  INST_ID  USERNAME
9jzm6vn5j06js   24919  enq: TX - row lock contention  ACTIVE  11006  44627    1        DBLINK_OTHER_BATCH_SCHEMA
Ok, it's another session of a different batch process in a remote database holding this
row. As it's less relevant, let's kill it! Muahaha!
Then, you'll see, my session gets the lock and is in the middle of a transaction:

proddb1> @kill
*** sid : 11006 serial : 44627 ***
System altered.

proddb1> @me
INST_ID  SID    SERIAL#  USERNAME         EVENT                  BLOCKING_SE  BLOCKING_SESSION  BLOCKING_INSTANCE
5        14174  479      MATHEUS_BOESING  transaction            UNKNOWN
2        4332   56037    MATHEUS_BOESING  PX Deq: Execution Msg  NOT IN WAIT
1        12058  9        MATHEUS_BOESING  class slave wait       NO HOLDER

To release the "row locked" to my principal process, let's "suicide" (kill my own session,
in this case, the one holding the row lock right now).

proddb5> @kill
*** sid : 14174 serial : 479 ***
System altered.

Now, with the problem solved, let's disable the trace and continue with the other daily
tasks…
proddb2> @untrace
Enter value for sid: 9796
Enter value for serial: 45117
PL/SQL procedure successfully completed.
See ya!
Matheus.
VKTM Hang – High CPU Usage
Today a database (RHEL 6, single instance, 11.2.0.4) suddenly started to "explode"
CPU on the VKTM process (100% CPU).
After some minutes completely lost on support.oracle.com (there were just a few notes
about binary permissions on Solaris), I decided to MacGyver it myself.
By Oracle words: “ VKTM acts as a time publisher for an Oracle instance. VKTM
publishes two sets of time: a wall clock time using a seconds interval and a higher
resolution time (which is not wall clock time) for interval measurements. The VKTM
timer service centralizes time tracking and offloads multiple timer calls from other
clients. ”
KB:
Master Note: Troubleshooting Oracle Background Processes (Doc ID 1509616.1)
A great post about hidden parameters:
http://oracleinaction.com/undocumented-params-11g/
The official one: http://www.orafaq.com/parms/index.htm
Hugs!
Matheus.
Oracle TPS: Evaluating Transaction per
Second
Sometimes this information has a 'myth atmosphere' around it… Maybe because Oracle
doesn't expose this information very clearly, and it's not the most useful metric.
But for comparison with other systems, and also to compare performance/'throughput'
with different infrastructure/database configurations, it can be useful.
   ... dba_hist_sysstat WHERE stat_name IN ('user commits', 'user rollbacks'))
SELECT datetime,
       ROUND(SUM(delta_value) / 3600, 2) "Transactions/s"
  FROM hist_snaps sn, hist_stats st
 WHERE st.instance_number = sn.instance_number
   AND st.snap_id = sn.snap_id
   AND diff_time IS NOT NULL
 GROUP BY datetime
 ORDER BY 1 DESC;
I like to use PL/SQL Developer to see this kind of data, and it allows us to make very
good charts very quickly. I tried it on a small database here, just as an example:
Jedi Master Jonathan Lewis wrote a good post about Transactions and this kind of
AWR metric here .
See ya!
Matheus.
Leap Second and Impact for Oracle
Database
Don't know what this is? Oh boy, I suggest you take a look…
It may sound a little crazy, but it's about a universal adjustment of atomic time.
Something like that. To understand, take a look at:
http://www.meinberg.de/english/info/leap-second.htm
http://en.wikipedia.org/wiki/Coordinated_Universal_Time
http://en.wikipedia.org/wiki/International_Atomic_Time
http://www.britannica.com/EBchecked/topic/136395/Coordinated-Universal-Time
http://www.britannica.com/EBchecked/topic/290686/International-Atomic-Time
Okey doke!
But what about Oracle Database adjustment? Good news: Nothing to do!
In Oracle words: “ The Oracle RDBMS needs no patches and has no problem with the
leap second changes on OS level. ”
But, attention!
If your application uses timestamp or sysdate, verify the adjustment at OS level. If it
consists of a "60" second, it can result in "ORA-01852", since 60 seconds is an illegal
value for the date or timestamp datatype.
(Insert leap seconds into a timestamp column fails with ORA-01852 (Doc ID
1553906.1))
(Doc ID 1472421.1)
(OEM on Linux): Enterprise Manager Management Agent or OMS CPU Use Is
Excessive near Leap Second Additions on Linux (Doc ID 1472651.1)
Matheus.
HANGANALYZE Part 1
Hi all!
I realized I have some posts about database hangs but no posts about
hanganalyze, system state or ashdump usage. So let's fix that.
To organize the ideas I'm going to split the subject into three posts. This first one will be
about hanganalyze.
Ok, so let me quote the clearest Oracle words I could find:
“Hanganalyze tries to work out who is waiting for who by building wait chains, and
then depending on the level will request various processes to dump their errorstack.”
This is very similar to what we can do manually through v$wait_chains. But it's quicker
and 'official', so let's use it!
But before I show how you can do it, it's important to mention that Oracle does not
recommend using 'numeric events' without an SR (MOS), according to Note
75713.1.
I prefer to use ORADEBUG on the database server if possible, considering you already
have something hanging:
Level | Description / Comment
2     | Minimal output — could be useful…
10    | Dumps all processes — can be a lot!

But take care! Using too high a level will cause lots of processes to be asked to dump
their stack. This can be very expensive…
In summary, remember the Note 75713.1!

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug setinst all
SQL> oradebug -g def hanganalyze <level>

OR

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug -g all hanganalyze <level>
============== HANG ANALYSIS: ==============
Open chains found:

This process (below) is running
Chain 1 : :

Below is a wait chain. Sid 16 waits for Sid 17
Chain 2 : :

-- Other chains found:
Chain 3 : :

Extra information that will be dumped at higher levels:
This just shows which nodes would be dumped at each level
[level 4]  :  2 node dumps -- [LEAF] [LEAF_NW] [IGN_DMP]
[level 5]  :  2 node dumps -- [NLEAF]
[level 10] : 10 node dumps -- [IGN]

State of nodes
All nodes are listed below. The "state" column shows the state that the session is in
([nodenum]/sid/sess_srno/session/state/start/finish/[adjlist]/predecessor):

The first nodes are IGN (ignore)
[0]/1/1/0x826f94c0/IGN/1/2//none
[1]/2/1/0x826f9d2c/IGN/3/4//none
[2]/3/1/0x826fa598/IGN/5/6//none
[3]/4/1/0x826fae04/IGN/7/8//none
[4]/5/1/0x826fb670/IGN/9/10//none
[5]/6/1/0x826fbedc/IGN/11/12//none
[6]/7/1049/0x826fc748/IGN/13/14//none
[7]/8/1049/0x826fcfb4/IGN/15/16//none
[8]/9/1049/0x826fd820/IGN/17/18//none
[9]/10/1049/0x826fe08c/IGN/19/20//none

Below are LEAF nodes in various states
[12]/13/158/0x826ff9d0/LEAF_NW/21/22//none
[15]/16/416/0x82701314/NLEAF/23/26/[16]/none
[16]/17/941/0x82701b80/LEAF/24/25//15
[17]/18/344/0x827023ec/NLEAF/27/28/[16]/none

You are told which processes are being dumped. They will dump errorstacks to their own trace files.
Dumping System_State and Fixed_SGA in process with ospid 18668
Dumping Process information for process with ospid 18656
Dumping Process information for process with ospid 18658
...
================================
PROCESS DUMP FROM HANG ANALYZER:
================================
This process dumps its errorstack and processstate. See for details of this information.
----- Call Stack Trace -----
calling call entry ...
======================================
END OF PROCESS DUMP FROM HANG ANALYZER
======================================
====================
END OF HANG ANALYSIS
====================
State   | Meaning
IGN     | Ignore
LEAF    | A leaf (end of chain) node that is waiting
LEAF_NW | A leaf node not in a wait (usually running — often the blocker)
NLEAF   | An element in a chain but not at the end (not a leaf)
Cool, right?
It's a very useful tool to analyze chains of hangs, and it also generates files that
can be attached to an SR, if needed.
OK, but if I'm in a hang situation, what if I can't log in as sysdba to my database?
In that case, wait for next week's post. There is a very useful kludge.
# KB:
Troubleshooting Database Hang Issues (Doc ID 1378583.1)
How to Collect Diagnostics for Database Hanging Issues (Doc ID 452358.1)
Troubleshooting Database Contention With V$Wait_Chains (Doc ID 1428210.1)
EVENT: HANGANALYZE – Reference Note (Doc ID 130874.1)
Important Customer information about using Numeric Events (Doc ID 75713.1)
Matheus.
HANGANALYZE Part 2
Hi!
See the first part of this post here: HANGANALYZE Part 1.
But what if you are having difficulty accessing the database, even with '/ as sysdba'?
You can create a 'preliminary connection' without creating a session, like this:
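As seen in the listing further below, the syntax is simply:

$ sqlplus -prelim / as sysdba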
This 'feature' is available since Oracle 10g, and it basically skips the session creation
part (which could block) when logging on as SYSDBA.
Step 3 (the session creation) obviously can create some 'lock', since it allocates
(and locks) memory (usually latches/KGX mutexes).
So, the preliminary connection consists of not executing step 3, and this is the reason it
resolves 'memory hang' situations…
But there is another observation: with -prelim you are able to get a systemstate or an
ashdump, but since 11.2.0.2 you cannot get a hanganalyze. The statement just returns:
ERROR: Can not perform hang analysis dump without a process state object and a
session state object.
No problem, MacGyver can be applied again; there is a kludge for the kludge: you
can use another ospid to generate the hanganalyze. It's not recommended to use a
vital (background) process, just to mention.
I listed some sessions connected to the database and used one of them to generate the
hanganalyze:
oraclegreporadb (LOCAL=NO)
oracle  2422  1  0 13:54 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle  2565  1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle  2567  1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle  2569  1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle  2571  1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle  2573  1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle  2575  1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle  2577  1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)

[oracle@devdb09 trace]$ sqlplus -prelim / as sysdba
SQL> oradebug setospid 2577
Oracle pid: 133, Unix process pid: 2577, image: oracle@devdb09
SQL> oradebug dump hanganalyze 3
Statement processed.
SQL> exit
Disconnected from ORACLE
Ok, now the hanganalyze was generated in that spid's tracefile. Let's see:

[oracle@devdb09 userdumpdest]$ ls -lrt | grep 2577
-rw-rw---- 1 oracle oracle  125 Jun 16 14:02 greporadb_ora_2577.trm
-rw-rw---- 1 oracle oracle 2772 Jun 16 14:02 greporadb_ora_2577.trc

[oracle@devdb09 trace]$ cat greporadb_ora_2577.trc | grep hanganalyze
Received ORADEBUG command (#1) 'dump hanganalyze 3' from process 'Unix process pid: 4068, image: '
Finished processing ORADEBUG command (#1) 'dump hanganalyze 3'

Awesome, hãn?
Matheus.
ASHDUMP for Instance Crash/Hang ‘Post
Mortem’ Analysis
Hi guys!
In the last weeks I talked about ASHDUMP in the post HANGANALYZE Part 1. Let's
talk about it now…
Imagine the situation: the database is hanging, you cannot find what is going on and
decide to restart the database, OR your leader/boss yelled at you to do it, OR you
know the database is going to go down anyway…
Everyone has been through this kind of situation at least once. After the restart
everything becomes OK and the 'problem' is solved. But now you are being asked for an
RCA (what caused this situation?). The database was hanging, so no snapshot was
closed and you lost the ASH info…
For these cases I think it is very useful to take one minute, before the database goes
down, to generate an ASHDUMP. It's very simple — an example of execution is shown
below. The command generates an ASH dump of the last 30 seconds to a trace file;
you can also generate an ASHDUMP for minutes by changing the ashdumpseconds
line.
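The commands were images in the original post; a minimal sketch, run as SYSDBA:

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug dump ashdumpseconds 30
-- or, for the last N minutes instead of seconds:
SQL> oradebug dump ashdump 5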
The trace file is generated with instructions to import the data with SQL*Loader. This way
you can perform your 'post mortem' analysis.
An example of ASHDUMP file:
ASHDUMPSECONDS
===================================================== Processing
Oradebug command 'dump ashdumpseconds 30' ASH dump **************** SCRIPT
TO IMPORT **************** ------------------------------------------ Step 1: Create destination
table ------------------------------------------ CREATE TABLE ashdump AS SELECT *
FROM SYS.WRH$_ACTIVE_SESSION_HISTORY WHERE rownum 0
---------------------------------------------------------------- Step 2: Create the SQL*Loader
control file as below ---------------------------------------------------------------- load data infile *
"str '\n####\n'" append into table ashdump fields terminated by ',' optionally enclosed
by '"' ( SNAP_ID CONSTANT 0 , DBID ,
INSTANCE_NUMBER , SAMPLE_ID , SAMPLE_TIME
TIMESTAMP ENCLOSED BY '"' AND '"' "TO_TIMESTAMP(:SAMPLE_TIME
,'MM-DD-YYYY HH24:MI:SSXFF')" , SESSION_ID ,
SESSION_SERIAL# , SESSION_TYPE , USER_ID ,
SQL_ID , SQL_CHILD_NUMBER , SQL_OPCODE
, FORCE_MATCHING_SIGNATURE , TOP_LEVEL_SQL_ID ,
TOP_LEVEL_SQL_OPCODE , SQL_PLAN_HASH_VALUE ,
SQL_PLAN_LINE_ID , SQL_PLAN_OPERATION# ,
SQL_PLAN_OPTIONS# , SQL_EXEC_ID , SQL_EXEC_START
DATE 'MM/DD/YYYY HH24:MI:SS' ENCLOSED BY '"' AND '"' ":SQL_EXEC_START"
, PLSQL_ENTRY_OBJECT_ID , PLSQL_ENTRY_SUBPROGRAM_ID ,
PLSQL_OBJECT_ID , PLSQL_SUBPROGRAM_ID ,
QC_INSTANCE_ID , QC_SESSION_ID ,
QC_SESSION_SERIAL# , EVENT_ID , SEQ# ,
P1 , P2 , P3 ,
WAIT_TIME , TIME_WAITED ,
BLOCKING_SESSION , BLOCKING_SESSION_SERIAL# ,
BLOCKING_INST_ID , CURRENT_OBJ# ,
CURRENT_FILE# , CURRENT_BLOCK# ,
CURRENT_ROW# , TOP_LEVEL_CALL# ,
CONSUMER_GROUP_ID , XID ,
REMOTE_INSTANCE# , TIME_MODEL ,
SERVICE_HASH , PROGRAM , MODULE ,
ACTION , CLIENT_ID , MACHINE ,
PORT , ECID ) --------------------------------------------------- Step 3: Load
the ash rows dumped in this trace file --------------------------------------------------- sqlldr
userid/password control=ashldr.ctl data= errors=1000000
--------------------------------------------------- #### 4092499541,1,93736863,"06-15-2016 16
:58:00.581442000",118,13423,1,152,"a3dj32s553jwz",0,3,16794496187212003770,"",
0,3121342805,1,20,0,27310348,"06/15/2016 16:57:59",0,0,0,0,0,0,0,310662678,642,1
415053318,9371681,422864,0,511985,590,62515,1,289642,7,1595,0,94,12553,,0,10
24,3427055676,"","","","","devapp16",35734,"" ####
4092499541,1,93736863,"06-15-2016 16:58:00.581442000",309,869,1,0,"",65535,0,0,
"",0,0,0,0,0,0,"",0,0,0,0,0,0,0,112941199,13,0,0,0,0,499675,4294967295,0,1,4294967
295,0,0,0,86,12553,,0,0,3427055676,"sqlplus@devdb09 (TNS
V1-V3)","sqlplus@devdb09 (TNS V1-V3)","","","devdb09",0,"" #### *** 2016-06-15
16:58:13.931 Oradebug command 'dump ashdumpseconds 30' console output:
Matheus.
SYSTEMSTATE DUMP
Hi guys!
I have already posted about Hang Analyze (part 1, part 2) and ASHDUMP. Now, in the
same 'package', let me show you the SYSTEMSTATE DUMP.
A systemstate is basically made of the process state of all processes in the instance (or
instances) at the time the systemstate is taken.
Through a systemstate it's possible to identify enqueues, row cache locks, mutexes,
library cache pins and locks, latch free situations, and other kinds of chains.
It's a good thing to attach to an SR, but it's quite hard to get used to reading/interpreting
the file. To understand exactly how to read a systemstate I'd recommend the best:
read the manual!
The doc Reading and Understanding Systemstate Dumps (Doc ID 423153.1) has
a very good explanation with examples; I'm not able to do it better.
What I can do is share the SYSTEMSTATE levels. I had some difficulty finding
them…
But before I show how you can do it, it's important to mention that Oracle does not
recommend using 'numeric events' without an SR (MOS), according to Note
75713.1.
Level | Content
10    | dump
11    | dump + global cache
256   | short stack
258   | 256 + 2 — short stack + reduced dump
266   | 256 + 10 — short stack + dump
267   | 256 + 11 — short stack + dump + global cache
Levels 11 and 267 also dump the global cache and will generate a very large trace file,
so under normal circumstances they are not recommended.
Under normal circumstances, if there are not too many processes, it is recommended
to use 266, because it also dumps the short stack (the call stack) of each process,
which can be used to analyze what each process is doing.
But generating the short stacks can be time-consuming: if there are a lot of processes
(say, 2000), it may take more than 30 minutes. In that case, you can generate a level
10 or a level 258; level 258 additionally collects short stacks compared to level 10, but
collects less lock element data than level 10 does.
To generate it:
An example of execution:
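The commands were images in the original post; a minimal sketch of the two usual ways (as SYSDBA):

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug dump systemstate 266

-- or via event:
SQL> alter session set events 'immediate trace name systemstate level 266';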
An example of a SYSTEMSTATE level 266 dumpfile:
Matheus.
Upgrade your JDBC and JDK before
Upgrade your Database to 12c Version!
Ok, now everyone is upgrading to 12c, right? Thank God, this version was released
in 2013!
But there are some things to be aware of when planning an upgrade, especially
regarding old applications and legacy. And pay attention! Not all of the requirements
are inside the database. That's the case of the JDBC version requirement.
The Database 12c documentation explicitly mentions that JDBC versions 11.1.x and
below are not supported anymore. It doesn't mean they don't work, only that they are
unsupported and you'll have no assistance from MOS if you need it. Better to avoid,
right?
Anyway, if you check the JDBC support matrix, if you are on version 11.2 or below you
have been unsupported since August/2015. So Database 12c is helping those of us who
don't have a patching policy to stay on the right path. Thanks, Database 12c!
If this is your situation, I highly recommend upgrading directly to JDBC
version 7, the latest available by now. See the JDBC version matrix:
Why? Because JDBC also has its own compatibility matrix. JDBC 7, for example,
demands your JDK to be at least version 7 (released in 2011!). So you need to be
at least on JDK version 6, as you can see below.
OK doke?
Matheus.
Unplug/Plug PDB between different Clusters
Everyone tests, writes and shows how to move pluggable databases between containers
(CDBs) in the same cluster, but few write/show how to move pluggable
databases between different clusters, with isolated storage. So, let's do that:
OBS: Just to make it easy to understand, this post is about the migration of a Pluggable
Database (BACENDB) from a cluster named ORAGRID12C and a Container
Database named INFRACDB to the Cluster CLBBGER12, into Container CDBBGER.
1. Access the container INFRACDB (Cluster GRID12C) and List the PDBs:
2. Shutdown BACENDB:
(of course it didn't work with a normal shutdown. I don't know what I was
thinking… haha)
3. Unplug BACENDB (PDB) to XML (must be done from Pluggable, as you see…)
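The commands were screenshots in the original post; a sketch of this step (the XML path is hypothetical, using the /migration ACFS described in step 4):

SQL> alter pluggable database BACENDB close immediate instances=all;
SQL> alter pluggable database BACENDB unplug into '/migration/bacendb.xml';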
4. Created an ACFS (180G) to use as “migration area” mounted on “/migration/” in
ORAGRID12C cluster:
7.2 How about the Datafiles?
9. Dropping Pluggable from INFRACDB:
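The command was a screenshot; as a sketch, once the PDB has been safely plugged into CDBBGER:

SQL> drop pluggable database BACENDB keep datafiles;
-- (or INCLUDING DATAFILES, if the source copies are no longer needed)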
That's it, okay? Of course there are a few other ways to copy the files from one
infrastructure to another, like scp rather than mount.nfs, RMAN copy, or other
possibilities…
By the way, one of the restrictions of pluggable migration is that it must use the same
endian format. But it's possible to use RMAN Convert Platform and convert the datafiles
to a filesystem, isn't it?
So, I guess it's not a hard limitation. Must test it and write another post… haha
About the post, this link helped, but, again, it doesn't mention "another"
cluster/infra/storage.
Matheus.
Database Migration/Move with RMAN: Are
you sure nothing is missing?
Forced by destiny to make a migration using backup/restore (with a little
downtime), how can you be sure nothing will be lost during the migration?
Here is a way: create your own data just before migrating.
Seems like a kludge, and it is.. haha.. But it works. Take a look:
# Original Database
SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup restrict;
ORACLE instance started.
Total System Global Area 2689060864 bytes
Fixed Size                  2229520 bytes
Variable Size            1996491504 bytes
Database Buffers          671088640 bytes
Redo Buffers               19251200 bytes
Database mounted.
Database opened.
SQL> create table matheus_boesing.migration (text varchar2(10));
Table created.
SQL> insert into matheus_boesing.migration values ('well done!');
1 row created.
SQL> commit;
Commit complete.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> shu immediate;
SQL> exit;
$ rman target /
connect catalog rman_mydb/password@catalogdb
run { backup archivelog all; }
# Destination Database
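The destination-side listing was an image in the original post; a sketch of the idea (the exact restore/recover steps depend on your migration method):

RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;

SQL> select text from matheus_boesing.migration;
-- if this returns 'well done!', the very last transactions before the shutdown made it across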
And be Happy!
Matheus.
Vulnerability: Decrypting Oracle DBlink
password (<11.2.0.2)
Hi all,
It's not a new vulnerability, but it's a good thing to have a personal note about. Besides
the security problem, it can save you in situations where you need the database link
password but don't have it.
It works only if the database link was created before 11.2.0.2.
The vulnerability is only exposed if the user has one of the following privileges:
SYS
SYSDBA
DBA
SYS WITHOUT SYSDBA
SYSASM
EXP_FULL_DATABASE
DATAPUMP_EXP_FULL_DATABASE
DATAPUMP_IMP_FULL_DATABASE
Starting with 11.2.0.2, Oracle changed the hash format for database link passwords,
solving this vulnerability. But it only applies to dblinks created in this version or higher.
If you have a dblink created when the database was on 11.2.0.1, for example, and
upgrade the database to 11.2.0.4, the problem remains until you recreate the database
link.
So, if you are upgrading a database from 11.2.0.1 or lower to 11.2.0.2 or higher,
remember to recreate the database links!
The vulnerability was exposed in 2012 by Paul Wright. Here is his PoC.
And here is his post.
To make it different, below I made the same test (using a PL/SQL block, to make it
prettier) with an upgraded database, from 11.2.0.1 to 11.2.0.4:
Note that the simple upgrade does not solve the issue. You need to recreate the
database link.
Matheus.
Ordering Sequences over RAC – Hang on
‘DFS lock handle’
Hi all!
What's up?
I had a fun weekend, so there are some things to write about.
This post is just to share an experience with the event 'DFS lock handle', related to
sequence ordering over the cluster nodes.
When it started, the application was just using the global service name (dedicated
connections) through the scan listener. The result was connections distributed over
the 5 nodes of the cluster. Bad idea.
At first, I suspected concurrency on the sequence over different nodes
(which can occur if the node caches are too small), based on a few XA transaction bugs
involving this event.
By the way, if you're facing this hang with XA transactions, please take a look at
"High rdbms ipc reply and DFS lock handle in 11gR2 RAC With XA Enabled
Application (Doc ID 1361615.1)".
It can be solved by setting "_clusterwide_global_transactions" to FALSE.
It's additionally recommended to read the Best Practices for Using XA with RAC.
proddb4 @sess
User:MATHEUS
SID SERIAL# INST_ID EVENT SQL_ID BLOCKING_SE BLOCKING_SESSION
BLOCKING_INSTANCE
------ ---------- ---------- ----------------- -------------- ----------- ---------------- -----------------
9386 147 4 DFS lock handle 9zr9vpvmkqzkv VALID 10968 4
9499 179 4 DFS lock handle fqk9y9q7u2d5c VALID 11082 4
8821 153 4 DFS lock handle 2jd84taf7krh2 VALID 13902 4
22442 1155 3 DFS lock handle 8ycpfxq2jthq3 VALID 9067 3
9860 1339 3 DFS lock handle 2jmzv23ug9kth VALID 10299 3
9772 1529 3 DFS lock handle 802kn9htah6pt VALID 22442 3
22543 1673 5 DFS lock handle 6tgvwkt6cqngk VALID 3074 5
22307 135 5 DFS lock handle 5b3zgqgq7bbdz VALID 3665 5
21010 91 5 DFS lock handle gkmycubvn9aa3 VALID 3546 5
9508 1459 3 DFS lock handle 7cw6bcjsf8xf2 VALID 10387 3
10299 4669 3 DFS lock handle 7y2tnuckh37wp VALID 11795 3
121 139 5 DFS lock handle 6tgvwkt6cqngk VALID 3310 5
596 113 5 DFS lock handle 8yqbzu29shvnm VALID 2603 5
360 113 5 DFS lock handle dv49pafm9z8zy VALID 596 5
10740 3177 3 DFS lock handle c6q65hnq0ju7x VALID 11707 3
9838 181 4 DFS lock handle aqa7afq2upkuq VALID 9386 4
714 77 5 DFS lock handle ft8xzyzhycpn2 VALID 360 5
9951 147 4 DFS lock handle 697mts944db7y VALID 9725 4
950 109 5 DFS lock handle cd2gsz5rb2qw9 VALID 3192 5
10387 1529 3 DFS lock handle 2tqnrbh0x60dp VALID 12238 3
10064 143 4 DFS lock handle d833wg4u9cfyb VALID 10649 1
833 1503 5 DFS lock handle 7ynbg2t4taxha VALID 2366 5
10649 53 1 DFS lock handle 0sgzmj1tbx4rh VALID 10737 1
2249 149 5 DFS lock handle aa6jr8ugxaz4z VALID 833 5
9612 175 4 DFS lock handle d2nrr4gtdjq9b VALID 9499 4
10825 57 1 DFS lock handle acmyc4sw7zzc2 VALID 10649 1
2603 1415 5 DFS lock handle fg47vs5wa8zq8 VALID 22307 5
2485 65 5 DFS lock handle 702x9zwtfktu6 VALID 714 5
10737 55 1 DFS lock handle bthxrpmz0ug63 VALID 12148 3
Ok doke, let's cancel the sessions and rerun the process on just one node (by SID). It
should solve the small-caches-across-the-cluster hang, without needing to modify the
sequence, right?
Beeep. Wrong:
proddb4 @sess
User:MATHEUS
SID SERIAL# INST_ID EVENT SQL_ID BLOCKING_SE BLOCKING_SESSION
BLOCKING_INSTANCE
------ ---------- ---------- --------------- ------------- ----------- ---------------- -----------------
2494 53953 4 DFS lock handle fc3cam368zsp6 UNKNOWN
6561 32113 4 DFS lock handle f618p0hd4xsy0 UNKNOWN
9269 111 4 DFS lock handle fkn8hxbsfkfnz UNKNOWN
9047 175 4 DFS lock handle fqk9y9q7u2d5c VALID 8931 4
459 12605 4 DFS lock handle 5b3zgqgq7bbdz VALID 9271 4
1929 305 4 DFS lock handle 6tgvwkt6cqngk VALID 8026 4
7349 1013 4 DFS lock handle 802kn9htah6pt UNKNOWN
7800 175 4 DFS lock handle 0hc1bmqj1fp4f UNKNOWN
21475 17349 4 DFS lock handle cfh3r4sq788vu VALID 9042 4
8026 641 4 DFS lock handle 6tgvwkt6cqngk VALID 459 4
14919 59 4 DFS lock handle gkmycubvn9aa3 VALID 15373 4
15032 2267 4 DFS lock handle 9zr9vpvmkqzkv VALID 7688 4
15145 2411 4 DFS lock handle ddkqx4xttc9s9 UNKNOWN
15373 1657 4 DFS lock handle 2jd84taf7krh2 VALID 15713 4
8934 157 4 DFS lock handle 8ycpfxq2jthq3 VALID 1929 4
15826 551 4 DFS lock handle d8dhmr2sx08xq VALID 9612 4
15713 3357 4 DFS lock handle 2jmzv23ug9kth VALID 10177 4
8821 155 4 DFS lock handle 9fpmw9cwak21s UNKNOWN
16050 7007 4 DFS lock handle 4t5qkth35r2um VALID 8705 4
2042 1269 4 DFS lock handle 7cw6bcjsf8xf2 UNKNOWN
What the hell!
Let's take a look at one of the SQLs to find the sequence…
ORDER!
Man, of course. It creates a serialization control across the nodes just to keep the
sequence in order, as explained in this post by Christo Kutrovsky.
proddb4 alter sequence SEQ_OWNER.SEQ_NAME noorder;
Sequence altered.
Then, TAADÃÃ!
proddb4 @sess
User:MATHEUS
SID SERIAL# INST_ID EVENT SQL_ID BLOCKING_SE BLOCKING_SESSION
BLOCKING_INSTANCE
----- ---------- ------- ------------------------ ------------- ----------- ---------------- -----------------
15145 2411 4 library cache: mutex X ddkqx4xttc9s9 UNKNOWN
15032 2267 4 library cache: mutex X 9zr9vpvmkqzkv UNKNOWN
14919 59 4 library cache: mutex X gkmycubvn9aa3 NOT IN WAIT
9269 111 4 library cache: mutex X fkn8hxbsfkfnz UNKNOWN
9047 175 4 library cache: mutex X fqk9y9q7u2d5c UNKNOWN
8934 157 4 library cache: mutex X 8ycpfxq2jthq3 UNKNOWN
8821 155 4 library cache: mutex X 9fpmw9cwak21s UNKNOWN
8026 641 4 library cache: mutex X 6tgvwkt6cqngk UNKNOWN
7800 175 4 library cache: mutex X 0hc1bmqj1fp4f UNKNOWN
7349 1013 4 library cache: mutex X 802kn9htah6pt UNKNOWN
2042 1269 4 library cache: mutex X 7cw6bcjsf8xf2 UNKNOWN
9160 1205 4 library cache: mutex X 6tgvwkt6cqngk NOT IN WAIT
9042 293 4 library cache: mutex X 4jc1u6n2qx94z UNKNOWN
10177 5611 4 library cache: mutex X c6q65hnq0ju7x UNKNOWN
9271 235 4 library cache: mutex X d2nrr4gtdjq9b NOT IN WAIT
9951 1191 4 library cache: mutex X bkdhxhhqdbqb9 UNKNOWN
8931 291 4 library cache: mutex X 697mts944db7y UNKNOWN
9838 1315 4 library cache: mutex X 6q6r7ht1hnctg NOT IN WAIT
8818 325 4 library cache: mutex X 2tqnrbh0x60dp UNKNOWN
Of course we're still having some mutex X waits, but it's a lot better than the DFS lock,
and the process just "goes".
Matheus.
Infiniband Error: Cable is present on Port
“X” but it is polling for peer port
Facing this error? Let me guess: ports 03, 05, 06, 08, 09 and 12 are alerting? You
have a Quarter Rack? Have you recently updated the Exadata plugin to version 12.1.0.3
or higher?
Don't panic!
In Quarter Racks, ports 3, 5, 6, 8, 9 and 12 are usually cabled ahead of
time, but not terminated. In some racks port 32 may also be unterminated. Checking
the incident in OEM you might see something like this image:
Or, if you prefer, you can go to the command line with a listlinkup on the InfiniBand
switch ILOM CLI interface:
Connector 10A Present  Switch Port 16 is up (Enabled)
Connector 11A Present  Switch Port 18 is up (Enabled)
Connector 12A Present  Switch Port 11 is up (Enabled)
Connector 13A Present  Switch Port 09 is down (Enabled)
Connector 14A Present  Switch Port 07 is up (Enabled)
Connector 15A Present  Switch Port 05 is down (Enabled)
Connector 16A Present  Switch Port 03 is down (Enabled)
Connector 17A Present  Switch Port 01 is up (Enabled)
Connector 0B Not present
Connector 1B Not present
Connector 2B Not present
Connector 3B Not present
Connector 4B Present  Switch Port 27 is up (Enabled)
Connector 5B Present  Switch Port 29 is up (Enabled)
Connector 6B Present  Switch Port 36 is up (Enabled)
Connector 7B Present  Switch Port 34 is up (Enabled)
Connector 8B Not present
Connector 9B Present  Switch Port 13 is up (Enabled)
Connector 10B Present  Switch Port 15 is up (Enabled)
Connector 11B Present  Switch Port 17 is up (Enabled)
Connector 12B Present  Switch Port 12 is down (Enabled)
Connector 13B Present  Switch Port 10 is up (Enabled)
Connector 14B Present  Switch Port 08 is down (Enabled)
Connector 15B Present  Switch Port 06 is down (Enabled)
Connector 16B Present  Switch Port 04 is up (Enabled)
Connector 17B Present  Switch Port 02 is up (Enabled)
Basically 2 options:
# disableswitchport 13A
Disable connector 13A Switch port 9
reason: Blacklist
Initial PortInfo:
# Port info: DR path slid 65535; dlid 65535; 0 port 9
LinkState:.......................Down
PhysLinkState:...................Polling
LinkWidthSupported:..............1X or 4X
LinkWidthEnabled:................1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedActive:.................2.5 Gbps
After PortInfo set:
# Port info: DR path slid 65535; dlid 65535; 0 port 9
LinkState:.......................Down
PhysLinkState:...................Disabled
#
A good reference for the commands is the doc: Controlling the InfiniBand Fabric.
I'd also recommend, of course, the MOS note 12c: Red Arrow Down Status on IB ports
or False Alert "Cable Is Present On Port 'N' But It Is Polling For Peer Port" (Doc
ID 1514940.1), besides the already mentioned "Bug" note in MOS.
See you!
Matheus.
After adding Datafile in Primary the MRP
Stopped in Physical Standby (Dataguard)
Hi all!
After adding a datafile in the PRIMARY database, the STANDBY MRP stopped. An
"ALTER DATABASE RECOVER MANAGED STANDBY DATABASE" did not solve the
problem, as you see:
15: '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Recovery Slave PR00 previously exited with exception 1111
MRP0: Background Media Recovery process shutdown (MYDB_DG)
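The commands used for the fix were shown as an image in the original post. The standard approach (the new datafile path below is hypothetical) is to recreate the UNNAMED file on the standby and restart recovery:

SQL> alter system set standby_file_management=MANUAL;
SQL> alter database create datafile '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
     as '/u01/oradata/MYDB_DG/mydata15.dbf';
SQL> alter system set standby_file_management=AUTO;
SQL> alter database recover managed standby database disconnect from session;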
Solved!
See alert log:
KB:
Managing Primary Database Events That Affect the Standby Database
Matheus.
Lock by DBLink – How to locate the remote
session?
So, if you identify a lock or another unwanted operation coming from a DBLink session,
how do you identify the original session in the remote database (the origin database)?
The million dollar answer is simple: through the PROCESS column of v$session. By the
way, it looks like it's easier than finding the local process (spid)… Take a look at my
example (scripts at the end of the post):
dest> @sid
Sid: 10035  Inst: 1
SEQ#   EVENT                        MODULE                      STATUS    SID    SERIAL#  INST_ID
29912  SQL*Net message from client  oracle@origin2 (TNS V1-V3)  INACTIVE  10035  35       1

dest> @spid
SPID      SID    PID  PROCESS_FOR_DB_LINK  MACHINE  LOGON_TIME
16188960  10035  882  17302472             origin2  24/08/2015 07:43:40
Now I know that SID 10035 maps to the local process 16188960 and that the process on
the origin database is 17302472. Now I can do whatever I need with this process:
locate the session in the origin database by its spid, see the SQL it is running, et cetera:
origin> @spid2
Enter value for process: 17302472
SID   SERIAL#  USERNAME   OSUSER          PROGRAM                                        STATUS
7951  41323    USER_XPTO  scheduler_user  sqlplus@scheduler_app.domain.net (TNS V1-V3)   ACTIVE

database2> @sid
Sid: 7951  Inst: 2
SQL_ID         SEQ#   EVENT                    MODULE       STATUS  SID   SERIAL#  INST_ID
1w1wz2mdunya1  56778  db file sequential read  REMOTE_LOAD  ACTIVE  7951  41323    2
That's OK? Simple, isn't it?
The scripts used (except "sid", which is a simple SQL on gv$session):
# spid:
col machine format a30
col process format 999999
select p.spid, b.sid, p.pid, b.process as process_for_db_link, machine, logon_time
  from v$session b, v$process p
 where b.paddr = p.addr
   and sid = &sid;
/
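The "spid2" script used on the origin database was not reproduced in the post; a minimal sketch of what it likely does (find the session whose server process matches the PROCESS_FOR_DB_LINK value):

# spid2:
select s.sid, s.serial#, s.username, s.osuser, s.program, s.status
  from v$session s, v$process p
 where s.paddr = p.addr
   and p.spid = '&process';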
See ya!
Matheus.
Listing Sessions Connected by SID
When we are preparing to move a database or something like that, it’s useful to know
if there is any session connecting by SID, right?
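The query itself was not included in the post. A hedged sketch of the idea: sessions that connected using the SID (instead of a service name) usually show up with SERVICE_NAME = 'SYS$USERS':

select inst_id, sid, serial#, username, program, machine
  from gv$session
 where service_name = 'SYS$USERS'
   and username is not null;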
Matheus.
VPD: “row cache objects” latch contention
The other day, we found a high occurrence of latch events in our principal/core
environment (11.2.0.3.0). The origin was all the "different business channels" that
access objects through VPD. The latch events were bit by bit dominating the
environment during the last months, which turned on an "attention alarm" for us.
Then we found the note: Bug 12772404 – Significant "row cache objects" latch
contention when using VPD – superseded (Doc ID 12772404.8):
"When VPD is used, intense row cache objects latch contention (dc_users) may be
caused by an internal Exempt Access Policy privilege check. Rediscovery Information:
VPD is in use
Significant "latch: row cache objects" waits occur
The waits are for the latch covering dc_users"
"…environment for every execution.
– Changing the policy function might be helpful in some cases,
eg: to use CONTEXT dependent policies instead of DYNAMIC policies"
The problem was definitively solved by applying the 11.2.0.4.2 PSU. No problems
after that.
Good luck, if it’s your situation.
Hugs!
Matheus.
Compilation Impact: Object Dependencies
Hi all!
It's not necessarily a DBA function, but how often does someone from business come
and ask you what the impact of recompiling one or another procedure is?
It probably happens because the DBA usually makes some magic and has a better
understanding of object relationships. It happens especially in cases where there is no
code governance…
So, you don't have to handle all the responsibility: you can share some of it with the
developers, through the DBA_DEPENDENCIES view.
The understanding is easy: there are dependent objects and referenced objects. If you
change the referenced one, all dependents will be impacted.
GREPORADB> @dependencies
Enter value for owner: GREPORA
Enter value for obj_name: TABLE_EXAMPLE

OWNER    NAME                       TYPE      DEPE  REFERENCED  REFERENCED_OWNER  REFERENCED_NAME
GREPORA  TOTALANSWEREDQUESTIONS     FUNCTION  HARD  TABLE       GREPORA           TABLE_EXAMPLE
GREPORA  USERRESPONSESTATUS         FUNCTION  HARD  TABLE       GREPORA           TABLE_EXAMPLE
GREPORA  VW_INPROGRESSFEEDBACKOPTS  VIEW      HARD  TABLE       GREPORA           TABLE_EXAMPLE
GREPORA  EVENTSTARTDT               FUNCTION  HARD  TABLE       GREPORA           TABLE_EXAMPLE
GREPORA  HAVEUSERANSWEREDANYTHING   FUNCTION  HARD  TABLE       GREPORA           TABLE_EXAMPLE
Nice, hãn?
## @dependencies
col owner for a18
col name for a35
col type for a10
col referenced_owner for a18
col referenced_name for a35
col referenced_type for a10
select owner, name, type, dependency_type, referenced_type, referenced_owner, referenced_name
  from dba_dependencies
 where referenced_owner like upper('%&owner;%')
   and referenced_name like upper('%&OBJ;_NAME%');
See ya!
Matheus.
RAC on AIX: Network Best Practices
Hi all!
A while ago I ran into some performance issues on AIX, working with instances
with different configurations (proc/mem). The root cause was basically an inefficient
network configuration for the interconnect (UDP).
As you know, UDP is a protocol without acknowledgements (for that reason it carries
less metadata and is faster). By default, every server has a buffer pool to send UDP
(and TCP) messages and another to receive them.
In my situation, since there was an 'inferior' instance, the pools were automatically set
smaller on that node, and it was causing a high rate of interconnect block resends
from the other instances. Indeed, there were lots of resends caused by overflows on
this smaller instance…
Here is one way to evaluate how much loss you are having for UDP on your AIX
server:
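The commands were an image in the original post; the usual checks on AIX, as a sketch (look for "socket buffer overflows" and "fragments dropped" in the udp section, and compare the buffer tunables between the nodes):

$ netstat -s          # check the "udp:" section for socket buffer overflows
$ no -a | grep udp    # udp_sendspace / udp_recvspace tunables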
This and other details about configuring RAC on AIX can be found in the note: RAC and
Oracle Clusterware Best Practices and Starter Kit (AIX) (Doc ID 811293.1)
Grepping Entries from Alert.log
Hey hey,
One more McGayver by me! Haha
Again to find some information in alert. This time, I’m looking to count and list all
occurrences of an action in alert. To archive this, I made the script below.
The functionality is just a little bit more complex than the script of the last post, but
stills quite simple. Take a look:
Parameters:
PAR1 : name of alert (the main alert.log)
PAR2 : Searched token
PAR3 : Start day you want to, in the format “Mon dd” or just “Mon”. Below an example.
PAR4 : Start Year (4 digits)
PAR5 : [optional]End day you want to, in the format “Mon dd” or just “Mon”. The
default value is “until now”.
PAR6 : [optional]End Year (4 digits). The default value is “until now”. If you use the
PAR5, you have to use PAR6.
PAR7 : [optional] List All entries and when?. If you want to use this PAR, you must to
use PAR5 and PAR6.
# Script grep_entries_alert.sh
if [ $# -lt 6 ]; then
 FIN=`cat $1 |wc -l`
else
 FIN=`cat $1 |grep -n $5 |grep $6$ |head -n 1 |cut -d':' -f1`
fi
BEG=`cat $1 |grep -n "$3" |grep $4$ |head -n 1 |cut -d':' -f1`
NMB=`expr $FIN - $BEG`
ENTR=`cat $1 |head -n $FIN |tail -$NMB| grep $2|wc -l`
echo Number of Entries: $ENTR >> log.log
if [ $# -lt 7 ]; then
 echo ------- Complete List Of Entries and When ---------- >> log.log
 for line in `cat $1 |head -n $FIN |tail -$NMB| grep -n $2|cut -d':' -f1`;do
  LR=`expr $line + $BEG`   # To get the "real line", without the displacement
  DAT=`expr $LR - 1`       # To get the line with the date of the entry
  echo awk \'NR==$DAT\' $1 >> aux.sh   # Printing the lines just calculated
  echo awk \'NR==$LR\' $1 >> aux.sh    # with aux.sh
 done;
 sh aux.sh >> log.log
fi
cat log.log
(Hahahaha)
Matheus.
Grepping Alert by Day
Hi all,
For those moments when your alert log is very big and the OS doesn't "work very well
with it" (in my case it was AIX), I jerry-rigged the shell script below. It puts into a new
log just the entries of a selected day.
The call can be made with two or three parameters, this way:
Parameters:
PAR1: name of the alert log
PAR2: the day, in the format "Mon dd"
PAR3: [optional] the year (4 digits); defaults to the current year
Examples:
Ex1: sh grep_day.sh alert_xxdb_1.log "Apr 12"
Ex2: sh grep_day.sh alert_xxdb_1.log "Apr 12" 2014
Generated files:
dalert_2015Apr12.log
dalert_2014Apr12.log
# Script grep_day.sh
if [ $# -lt 3 ]; then
 YEAR=`date +"%Y"`
else
 YEAR=$3
fi
DATEFORMAT=`echo $2|cut -d' ' -f1`""`echo $2|cut -d' ' -f2`
BEG=`cat $1 |grep -n "$2" |grep $YEAR |head -1 |cut -d':' -f1`
FIN=`cat $1 |grep -n "$2" | grep $YEAR |tail -1 |cut -d':' -f1`
NMB=`expr $FIN - $BEG`
cat $1 |head -$FIN |tail -$NMB > dalert_$YEAR$DATEFORMAT.log
See ya!
Matheus.
Searching entries on Alert.log: A Better Way
Hi all!
As the oldest readers know, one day I had to find some entries in the alert log and I
had a really big file. So I jerry-rigged some scripts for grepping the alert with auxiliary
files and so on.
You can see the posts here: Grepping Alert by Day and Grepping Entries from Alert.log.
So… They are functional, but probably the worst way to do it. I didn't know, and was
innocent enough not to search for, the view x$dbgalertext.
It is also possible to write to the alert log through the procedure
SYS.DBMS_SYSTEM.KSDWRT.
Ok, so let me fix this situation with these two good guys: @write_alert and
@find_alert
greporadb> @write_alert
Enter value for text: GrepOra.com best blog ever!
PL/SQL procedure successfully completed.

greporadb> @find_alert
Enter value for inst: 1
Enter value for host:
Enter value for message: GrepOra.com

ORIGINATING_TIMESTAMP          Inst#  HOST_ID      MESSAGE_TEXT
13/06/16 16:53:13,699 +00:00   1      greporasrvr  GrepOra.com best blog ever!

1 row selected.
## find_alert.sql
col ORIGINATING_TIMESTAMP for a40
col host_id for a15
col inst_id for 99
col MESSAGE_TEXT for a100
set linesize 500
SELECT originating_timestamp, inst_id, host_id, message_text
  FROM x$dbgalertext
 WHERE 1=1
   AND inst_id like '%&INST;%'
   AND upper(host_id) like upper('%&host;%')
   AND upper(message_text) like upper('%&message;%')
 ORDER BY record_id asc;
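The @write_alert script was not reproduced in the post; a minimal sketch based on the DBMS_SYSTEM.KSDWRT reference above (the first argument, 2, means "write to the alert log"):

## write_alert.sql
begin
  sys.dbms_system.ksdwrt(2, '&text');
end;
/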
Ok, fixed!
See ya!
Matheus.
Alter (Fix) Oracle Database Date
When you don't have access to the OS and just have to alter the database date…
# Fix Date:
# Unfix Date:
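The commands were images in the original post; as a sketch (FIXED_DATE accepts a date in the 'YYYY-MM-DD-HH24:MI:SS' format; the timestamp below is hypothetical):

-- fix the date:
SQL> alter system set fixed_date = '2016-06-30-15:00:00';

-- unfix (back to the real clock):
SQL> alter system set fixed_date = NONE;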
OBS: Just to make it clear: the date will really be "fixed". The time will "stop":
seconds and minutes will not advance…
Matheus.
Explain ORA-XXX on SQL*Plus
For those moments when the error is unknown/rare, SQL*Plus helps us. Just call "oerr"
from the OS:
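A quick sketch of the idea (ORA-01950 used here just as an example):

SQL> host oerr ora 01950
-- or directly from the OS shell:
$ oerr ora 01950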
Oracle Database Licensing: First Step!
Oracle licensing is always a complex subject, right?
I did some research about it today and decided to share it, in quick mode, as I usually
do. I focused on Database, by the way.
Ok, now, the best way to understand how to evaluate my environment is searching on
Oracle Support, right?
And it does not disappoint: Database Options/Management Packs Usage Reporting
for Oracle Databases 11gR2 and 12c (Doc ID 1317265.1)
In this note you can get a complete and up-to-date script used to evaluate
feature/option/pack utilization (options_packs_usage_statistics.sql). This is a
good way to go if you are preparing for an audit…
I made some simple queries to validate/understand the results from the Oracle script. So,
if you don't have access to Oracle Support, they might help you:
col parameter for a50
select parameter, value
  from v$option
-- where value = 'TRUE'   -- to get used options only
/

select cpu_count_current, cpu_core_count_current, cpu_socket_count_current,
       cpu_count_highwater, cpu_core_count_highwater, cpu_socket_count_highwater
  from v$license;
An interesting point is that you can disable and enable options through the chopt
command. But you must take the database down first. Example to disable the partitioning
option:
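The command was an image in the original post; a sketch (the database name is hypothetical, and the instance must be stopped first):

$ srvctl stop database -d mydb
$ chopt disable partitioning
$ srvctl start database -d mydb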
Some time ago I wrote a post about evaluating database licensing across the whole
database park through OEM. It remains valid; I recommend you take a look at that post
too.
Matheus.
Getting Oracle Parameters: Hidden and
Unhidden
Today’s post is a quick post!
Very quick post! very very quick post!
But it’s a helpful post!
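The query itself was an image in the original post. A commonly used sketch (run as SYS, since it reads X$ tables) that lists both regular and hidden ("_") parameters:

select i.ksppinm  name,
       v.ksppstvl value,
       i.ksppdesc description
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm like lower('%&parameter%')
 order by 1;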
Matheus.
Application Hangs: resmgr:become active
The application APP hangs on resmgr:become active. There is a resource plan defined
which has a specific group for this application. What is wrong and how do we fix it?
Here I presume you know what a resource manager and a resource plan are, and, of
course, what they exist for. You must also know that this event is related to too many
active sessions in that group of the resource plan.
BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;
  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.oracle_user,
                      -- DBMS_RESOURCE_MANAGER.service_name (or a lot of possibilities. Google it!)
    value          => 'MYAPP',
    consumer_group => 'APP_PLAN');
  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
SELECT 'EXEC DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS ('''||SID||''','''||SERIAL#||''',''APP_PLAN'');'
  FROM V$SESSION
 WHERE username = 'MYAPP'
   AND resource_consumer_group = 'OTHER_GROUPS';
Remember that creating a resource plan without making the mappings is a bit
pointless…
Matheus.
How to Prevent Automatic Database Startup
This is a quick post!
– About Oracle Restart
– Reference to SRVCTL
Ok!
In a nutshell, my notes:
Once the database is registered, change the management policy for the
database to manual:
srvctl modify database -d $DBNAME -y manual
Matheus.
TFA – Collecting Period
I like quick posts, you already know that. It’s like a quick memo to myself in the future.
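The command itself was an image in the original post; a sketch of collecting TFA diagnostics for a specific period (the timestamps are hypothetical — check tfactl diagcollect -h for the exact format on your version):

$ tfactl diagcollect -from "Jun/16/2016 10:00:00" -to "Jun/16/2016 13:00:00"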
See ya!
Matheus.
ARCH Process Killed – Fix Without Restart
Hi all,
What if your archiver (ARCn) processes hang or get killed? How do you keep archiving
going without restarting the database?
Take a look…
Problem:
Solution:
Increase the number of archiver processes…
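The listings were images in the original post. The idea, sketched (LOG_ARCHIVE_MAX_PROCESSES is dynamic, so new ARCn processes are spawned without a restart):

SQL> alter system set log_archive_max_processes=8 scope=both sid='*';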
Matheus.
DBA_TAB_MODIFICATIONS
Do you know the view "dba_tab_modifications"?
It's very useful to know what has changed since the last stats gathering on a table, and
all the decisions/information that come with that… See the example below.
mydb> create TABLE matheus_boesing.test (nro number);
Table created.

mydb> begin
  2    for i in 1..1000 loop
  3      insert into matheus_boesing.test values (i);
  4    end loop;
  5    commit;
  6  end;
  7  /
PL/SQL procedure successfully completed.

mydb> select table_owner, table_name, inserts, updates, deletes
        from dba_tab_modifications
       where table_name = 'test' and table_owner = 'MATHEUS_BOESING';
no rows selected

mydb> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.

mydb> select table_owner, table_name, inserts, updates, deletes
        from dba_tab_modifications
       where table_name = 'test' and table_owner = 'MATHEUS_BOESING';

TABLE_OWNER       TABLE_NAME  INSERTS  UPDATES  DELETES
MATHEUS_BOESING   test        1000     0        0

mydb> EXEC DBMS_STATS.GATHER_TABLE_STATS('MATHEUS_BOESING','test');
PL/SQL procedure successfully completed.

mydb> select table_owner, table_name, inserts, updates, deletes
        from dba_tab_modifications
       where table_name = 'test' and table_owner = 'MATHEUS_BOESING';
no rows selected
Oracle – Lost user’s password?
Hi everyone,
• Connect through a proxy user ("grant connect through"), without changing or knowing
the password; or
• Save the password hash - change the password - perform what you need - change
back to the original password using the hash.
PS: The second approach might be riskier because the password may be set in
some application, datasource, etc… So be aware of the impact before actually
changing the password.
The first one is very simple, and in order to do that, you have to connect as sysdba:
sqlplus / as sysdba
Then you will say to the database: "Alright mate, now you will connect to user A,
through user B, even without knowing user A's password", with the following
command:
alter user userA grant connect through userB;
By performing this command, you’ll be able to access the user A, through the user B.
But how does the connection work?
conn userB[userA]/passB@database
See that we have put the schema’s name in [ ]’s. This is how it works. Once you
connect to the database and run:
show user
As said before, this one should be faced more carefully, as it might affect something,
because we will temporarily change the password of the user.
First of all, connect to the database with a user who has the "select any dictionary" privilege, or at least select on dba_users. Then run:
Now that you have the CURRENT password hash saved, change the user's password:
Doing that, you will be able to connect to the user using the new password. Do what you need and, when you are done, change the password back to the original one like this:
Please notice the VALUES keyword there, using the saved password hash. This is the command that allows us to set the user's password using the hash.
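The actual commands were images in the original post; a minimal sketch of the hash approach (on 11g+ the hash lives in sys.user$ rather than dba_users.password, and the hash below is just a placeholder):
select name, password, spare4 from sys.user$ where name = 'USERA';
alter user userA identified by Temp_Pwd_123;
-- ... do what you need as userA ...
alter user userA identified by values 'SAVED_HASH_HERE';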
Rafael.
79
Scheduler Job by Node (RAC Database)
Sometimes you want to run something just in one node of the RAC. Here is an
example to do it:
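The example itself was an image in the original post; a minimal sketch using the DBMS_SCHEDULER INSTANCE_ID attribute to pin a job to one RAC node (job name and action are illustrative):
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'MYJOB_NODE1',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin null; end;',
    repeat_interval => 'FREQ=DAILY',
    enabled         => FALSE);
  -- pin the job to instance 1 only:
  DBMS_SCHEDULER.set_attribute('MYJOB_NODE1', 'instance_id', 1);
  DBMS_SCHEDULER.enable('MYJOB_NODE1');
END;
/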
Matheus.
80
ORA-01950 On Insert but not on Create
Table
Sounds weird that creating a table does not raise any error, but inserting a valid row into that same table raises a permission error, right? Just take a look:
It's probably a new user, or a tablespace on which the user has no quota. But why doesn't the table creation result in an error, only the insert?
Certainly the database is 11.2 or above, because this mechanism is related to deferred_segment_creation, introduced in that release. This parameter defaults to true, which means that the segments for tables and their dependent objects (LOBs, indexes) are not created until the first row is inserted into the table.
So, only when allocating the segment for the first insert does the database check privileges (quota) on the tablespace.
It's a good way to save space, but it also causes some odd situations when exporting with EXP, like described here.
Anyway, I think Oracle could validate the segment at CREATE TABLE time; it would avoid a lot of misunderstanding…
So, creating a table does not imply that inserts will succeed, unless you grant the quota, or disable deferred_segment_creation and bring back the behavior of earlier versions:
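The commands were not in the extract; a minimal sketch of both fixes (user, tablespace and scope are illustrative):
-- give the user quota on the tablespace, so the insert can allocate its segment:
alter user myuser quota unlimited on mytbs;
-- or restore the pre-11.2 behavior (segment allocated at CREATE TABLE time):
alter system set deferred_segment_creation = false scope=both;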
See ya!
Matheus.
81
Adding datafile hang on “enq: TT –
contention”
Yesterday a colleague asked me about the "enq: TT – contention" event on his session, which was adding a datafile to a tablespace that had run out of space in an 11.1.0.7 database. I've faced this situation once before and decided to document it.
Hugs!
Matheus.
82
Quick guide about SRVCTL
Hi everyone!
In order to check ALL the services already created via SRVCTL, we should use:
Please bear in mind that the database name does not necessarily match the instance name, so to make sure about the database name, run:
Example:
If you have more than one database on that server, it will be returned too.
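The commands were not in the extract; a minimal sketch of both checks (the database name is illustrative):
srvctl status service -d dbgrepora      # lists the services of that database and where they run
srvctl config database                  # lists the db_unique_name of every database registered on this node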
Ok, now let’s try to create a new service name for your database. In the node that you
want to create the service_name, please run the following.
where follow the rule already described above, and you can create as you wish.
The syntax follows the same idea, but we should include different parameter in there,
which is:
-r
Example:
83
srvctl add service -d dbgrepora -s service_dbg -r dbgrepora1,dbgrepora2
Creating the service_dbg service, and checking the status, you’ll have an output like:
To stop a service, the syntax is:
srvctl stop service -d <db_unique_name> -s <service_name>
Best Regards,
Rafael.
84
Saving database space with ASSM
It’s good way reclaim WASTED space from tables and index using the Segment
Advisor.
Only tablespaces with segment space auto are eligible to Segment Advisor.
It will save some database storage area, and make it more effective cause
by LHWM/HHWM.
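As an illustration, once the Segment Advisor flags a segment, the space can usually be reclaimed with a shrink (object names are illustrative):
alter table myschema.mytable enable row movement;
alter table myschema.mytable shrink space cascade;   -- compacts the segment and lowers the HWM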
Maiquel.
85
Flashback- Part 1 (Flashback Drop)
Hi everyone!
Flashback is a technology that becomes handy to the DBA when you need to recover
the database from logical issues, and it is considered a great feature to use for
recovery scenarios, besides RMAN. Compared with Recovery Manager (RMAN), Flashback is a much simpler way to recover from logical issues (caused by end users, most of the time), while RMAN is better for physical issues. These issues can be things like:
And so on… There are plenty of scenarios. So, in order to understand each of them better, we'll explain them in detail, separately, in different posts, so we don't get tired of reading that much.
• Flashback Drop
• Flashback Query
• Flashback Table
• Flashback Database
For this Part 1 , we’ll discuss about item 1 only, and in the next posts we will continue
this saga!
Most of the flashback operations are undo-based, so it's up to the DBA to set up a good retention based on his own environment. The steps are:
Okay then, enough with the talking and let’s go right to the point.
86
FLASHBACK DROP
This feature allows us to restore a table that was accidentally dropped, using the RecycleBin as a source. The RecycleBin is basically where your tables and associated objects (such as indexes, constraints, triggers, etc…) are sent when they are dropped (yes, they are still in the database somehow, even if you have dropped them). Flashback Drop is capable of restoring dropped tables based on the RecycleBin. Ok GrepOra, but for how long will we have the dropped objects available in the RecycleBin? They remain available until someone purges them explicitly or they are removed due to space pressure.
Create table:
Check in the RecycleBin, with the following command, the dropped table:
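Both commands were images in the original post; a minimal sketch of the two steps (the three-column layout matches the table used later in this series):
SQL> create table grepora (column1 varchar2(10), column2 varchar2(10), column3 varchar2(10));
SQL> drop table grepora;
SQL> select original_name, object_name, type, droptime from user_recyclebin where original_name = 'GREPORA';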
Please have a look at the OBJECT_NAME column, which now contains the current name of the dropped table in the database, while the ORIGINAL_NAME column shows the name as it was before the drop. This happens because we can have an object with the same name created and dropped several times, so all of its versions remain available in case we need a specific one.
To prove this is real, we can simply query the dropped table using the RecycleBin’s
name:
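The query was an image in the original post; presumably something like this, using the RecycleBin name shown further below:
SQL> select count(*) from "BIN$NRxYdbc4hpjgUzvONgrFng==$0";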
Now we have to actually use the flashback command to restore the dropped table and
make it available again with the right name. To do that, we have some different ways.
87
Note: In case we have different versions of the table with the same name on the
RecycleBin, Oracle will always choose the most recent one. If you want to restore an
older version, you should use the OBJECT_NAME for the operation.
Examples:
SQL> flashback table grepora to before drop;
Flashback complete.
SQL> select count(*) from grepora;
  COUNT(*)
----------
         0
In the example above, we have successfully restored the GREPORA table using its
ORIGINAL_NAME. But what if we had different versions of the same table?
First, let’s drop the table that we have restored, and check it on the RecycleBin.
SQL> drop table grepora;
Table dropped.
SQL> select original_name, object_name, type, droptime
     from user_recyclebin where original_name = 'GREPORA';
ORIGINAL_N OBJECT_NAME                    TYPE  DROPTIME
---------- ------------------------------ ----- -------------------
GREPORA    BIN$NRxYdbc4hpjgUzvONgrFng==$0 TABLE 2016-06-12:16:20:48
Create the table again, using the same DDL, and then drop it:
Check the RecycleBin. We will find the two versions of our table, in different times.
Check that the ORIGINAL_NAME for both lines are the same. Now we can flashback
any version of the same table, using the OBJECT_NAME :
As we still have the other table and want to restore it as well, we obviously cannot
have the same name for both of them, so we can restore it with the RENAME TO
clause:
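The commands were images in the original post; a minimal sketch of both restores, using the RecycleBin name from above (the BIN$ name of the second version would of course be different):
SQL> flashback table "BIN$NRxYdbc4hpjgUzvONgrFng==$0" to before drop;
SQL> flashback table grepora to before drop rename to grepora_2;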
88
SQL> select table_name from user_tables;
TABLE_NAME
------------------------------
GREPORA_2
GREPORA
Please stay tuned for the next Flashback Posts upcoming! We’ll cover it all. I hope it
was all clear to everyone. Thanks for reading and have a wonderful week!
Rafael.
89
Flashback – Part 2 (Flashback Query)
Hey team,
This is the second part of our Flashback Tutorial and today we’re gonna talk about
FLASHBACK QUERY. Please check here for the first post about Flashback Drop .
Let’s go:
FLASHBACK QUERY
In the last Flashback post, we learnt about restoring tables that were dropped from the database using the RecycleBin facility. But if you think about it, it's way more likely that a table suffers an undesirable change than that it actually gets dropped. For example, when you UPDATE a table with values that are not correct, or delete rows (and commit, of course), and so on: wouldn't it be great if we could go back in time and see how it was before the change? Thanks to the almighty Oracle Database, we can! We can use Flashback Query to see how a table was at a specific time in the past. And the best part of it is that, if you are the owner of your table, you can do it yourself, no need to bother the DBA with that (definitely the best part), and you can correct your own mistakes. Also, please keep in mind that for FLASHBACK QUERY to work, we need to have our undo properly configured. To illustrate that, let's see an example:
SQL> insert into grepora values ('value1', 'value2', 'value3');
1 row created.
SQL> insert into grepora values ('line2', 'line2', 'line2');
1 row created.
SQL> insert into grepora values ('line3', 'line3', 'line3');
1 row created.
SQL> insert into grepora values ('line4', 'line4', 'line4');
1 row created.
SQL> insert into grepora values ('line5', 'line5', 'line5');
1 row created.
SQL> commit;
SQL> select * from grepora;
COLUMN COLUMN COLUMN
------ ------ ------
value1 value2 value3
line2  line2  line2
line3  line3  line3
line4  line4  line4
line5  line5  line5
Get the SYSDATE , to know the exact date where you have this amount of data:
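The query was not in the extract; something as simple as this works:
SQL> select to_char(sysdate, 'DD/MM/YYYY HH24:MI:SS') from dual;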
Now, let’s make some “mistakes” here, try to change the content of the table, deleting
and updating values:
90
SQL> delete from grepora where column1='line5';
1 row deleted.
SQL> update grepora set column1='line1', column2='line1', column3='line1' where column1='value1';
1 row updated.
SQL> commit;
Commit complete.
SQL> select * from grepora;
COLUMN COLUMN COLUMN
------ ------ ------
line1  line1  line1
line2  line2  line2
line3  line3  line3
line4  line4  line4
Check that the content of the table is now different from the original version after our changes. How can we revert that if we didn't know how it was before?
We use the famous AS OF TIMESTAMP clause, which allows us to see the table at a different point in time in the past.
With the example below, check that after using the clause AS OF TIMESTAMP and
using the date we caught before to DML our table, we can find the same previous
data:
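The AS OF query itself was an image in the original post; a minimal sketch, using a timestamp captured before the changes (the literal is illustrative):
SQL> select * from grepora
     as of timestamp to_timestamp('2016-06-12 16:00:00', 'YYYY-MM-DD HH24:MI:SS');
COLUMN COLUMN COLUMN
------ ------ ------
value1 value2 value3
line2  line2  line2
line3  line3  line3
line4  line4  line4
line5  line5  line5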
With this feature, we can see how a table was “before the mistake” and do the proper
actions to fix it.
I hope it was clear to everyone, if you have any doubt, please get in touch with
GrepOra and we’ll be glad to help.
For the next post, we’ll be doing a test case for FLASHBACK VERSIONS QUERY!
Stay Tuned!
Rafael.
91
Flashback- Part 3 (Flashback Versions
Query)
Hi Everyone,
Here we are to continue our Flashback Saga! If you lost our first 2 posts about that
and are in the mood for a good reading, please go through the links below:
Flashback Drop
Flashback Query
Today we are going to discuss Flashback Versions Query , which has a strong link
with the previous post, the Flashback Query (AS OF). With this feature, we are able to
verify all changes made between 2 points in time in the past, using SCN or a
Timestamp. Of course, the Flashback Versions Query will retrieve only the committed
data. Just like Flashback Query , the Flashback Versions Query is undo-based, so
make sure your undo Tablespace and undo retention period is good enough for you.
What is the difference between Flashback Query and Flashback Versions Query?
Well, basically using Flashback Query, you’ll see an EXACT point in the past for one
single value. Using the Versions Query, you can see all versions of that value between
two times in the past. Interesting huh?
We have our table already created on the previous Flashback Posts, so let’s use it:
• Insert values into the table, and then get the SCN:
SQL> insert into grepora values ('line1', 'line1', 'line1');
1 row created.
SQL> commit;
Commit complete.

set echo off feedback off lines 200 pages 0
column scn format 999999999999999
SELECT dbms_flashback.get_system_change_number scn FROM DUAL;
• So, currently our table has only one value, which is:
92
SQL> select * from grepora;
COLUMN1 COLUMN2 COLUMN3
------- ------- -------
line1   line1   line1
• We still have only one row in our table, but we have changed it several times with the UPDATE commands above.
Now it's time. Let's use this very nice feature to check all the versions that this value has had between two points in time.
• First, get the SCN again, in order to have the second point in time to compare:
• Now, we can compare all existent values for this table/columns having 2 SCN as
reference (We could also use Timestamp for that).
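The queries were images in the original post; a minimal sketch of the idea (the SCN values are placeholders for the two numbers captured above):
SQL> SELECT dbms_flashback.get_system_change_number scn FROM DUAL;
SQL> select versions_xid, versions_starttime, versions_operation, column1, column2, column3
     from grepora
     versions between scn 1234567 and 1235000;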
Done! With this example we could see all the versions of that table between 2 times in
the past using SCN!
For the next post, we will check Flashback TRANSACTIONS query, which can go a
little further than this one. We’ll see a little more next week!
Rafael.
93
Flashback – Part 4 (Flashback Transaction
Query)
Hi all,
If you have missed the previous Flashback posts, please go through these links to find and read them if you feel like it!
And now we are halfway to the end of the Flashback posts; let's see a little more about FLASHBACK TRANSACTION QUERY.
Put very simply, Flashback Transaction Query is pretty much the same as Flashback Versions Query, where you can see all changes made between two points in time. The difference is that the TRANSACTION query makes it easier to roll back an operation, by providing the proper SQL to undo it.
FTQ is also undo-based so, as usual, make sure the undo tablespace has enough space and that undo_retention is long enough for your scenario.
There are some things that need to be configured before using FTQ, so make sure they are properly set up:
• GRANTS: any user who might need to use FTQ must have the SELECT ANY TRANSACTION privilege, and also the FLASHBACK privilege on the tables he wants to be able to flashback (or FLASHBACK ANY TABLE).
94
TABLE_OWNER   VARCHAR2(32)
ROW_ID        VARCHAR2(19)
UNDO_SQL      VARCHAR2(4000)
See the XID column there? This is our transaction identifier. But how would we know the identifier of our transaction if we don't have this information?
Every table exposes pseudocolumns named VERSIONS_% that contain all this information when we use VERSIONS BETWEEN, and some of them are named as:
In order to clarify all of this, let’s use an example to illustrate every statement read
here today.
• Compatibility 10.0 (the COMPATIBLE initialization parameter must be at least 10.0)
Now, we wanna know the values of some of our hidden columns for GREPORA table
(created on previous posts), such as VERSIONS_XID, in order to identify the
transaction id’s to properly use FTQ. Let’s use the following query to get it:
Obviously, please adjust your script to run between the desired timestamp.
Once you have the information captured above, we can figure out the transaction id (XID) and query the FLASHBACK_TRANSACTION_QUERY view, to be able to roll back our transaction:
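Again a sketch, since the query was an image in the original post (the XID literal is a placeholder for the value returned by VERSIONS_XID above):
SQL> select operation, table_name, undo_sql
     from flashback_transaction_query
     where xid = hextoraw('0600080021000000');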
95
Please note the UNDO_SQL column, showing us the exact command to be executed to roll back that exact transaction. This is awesome, right?
Also, instead of using the XID as a filter, you can use any other of those pseudocolumns, or even filter by timestamp between two points in time.
Please let us know if you have any doubt on this, and have an awesome week.
Rafael.
96
Flashback – Part 5 (Flashback Table)
Hi everybody,
So let’s do it people.
Flashback Table is a very interesting facility that our almighty Oracle Database
provide us, giving us the ease of flashback a table (obviously) to a point-in-time in the
past or even to an SCN.
An interesting part is: if you have dependent data on this table, it will be reverted as well when you perform the flashback table! Awesome, right?
The difference compared to all the previous parts is that none of them affected the table as a whole; they were very targeted fixes. Now we have the possibility of bringing the entire table back with one simple command.
• All triggers are disabled when you perform a Flashback Table operation, and they remain disabled regardless of whether they were enabled or disabled before. So make sure to identify the enabled ones before executing the Flashback Table.
• Enable Row Movement on the table you desire to perform the flashback
97
• Flashback the table to the SCN or Timestamp you caught at the step 2.
Step 1:
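The command for this step was an image in the original post; per the bullet above, it is presumably just:
SQL> alter table grepora enable row movement;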
Step 2:
Step 3:
SQL> select * from grepora;
COLUMN COLUMN COLUMN
------ ------ ------
line1  line1  line1
line2  line2  line2
line3  line3  line3
line4  line4  line4
line5  line5  line5
Step 4:
SQL> update grepora set column1='grepora';
5 rows updated.
SQL> commit;
Commit complete.
Step 5:
SQL> select * from grepora;
COLUMN1  COLUMN COLUMN
-------- ------ ------
grepora  line1  line1
grepora  line2  line2
grepora  line3  line3
grepora  line4  line4
grepora  line5  line5
Step 6:
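The command was an image in the original post; presumably the flashback itself, using the SCN captured in step 2 (the number below is a placeholder):
SQL> flashback table grepora to scn 1234567;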
Step 7:
SQL> select * from grepora;
COLUMN1  COLUMN COLUMN
-------- ------ ------
line1    line1  line1
line2    line2  line2
line3    line3  line3
line4    line4  line4
line5    line5  line5
PS: If you want to already enable the triggers along with the flashback command,
please use:
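A sketch of the same command with the triggers kept enabled (the SCN is still a placeholder):
SQL> flashback table grepora to scn 1234567 enable triggers;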
And there we go! The table is reverted to its previous position. For this time we used
SCN to flashback.
Also bear in mind that as most of the Flashback operations, this one is also
undo-based, so make sure you have the size and retention that you need.
Please feel free to comment and e-mail us in case of any doubt or suggestion.
98
Have a wonderful week.
Rafael.
99
Flashback – Part 6 (Flashback Database)
Hi people,
Today’s post is gonna be about Flashback Database, a pretty good feature for
non-production levels of your structure, I would say.
It’s very unlikely that you are going to rollback your entire production database to a
point-in-time in the past, right? But if you need to, this facility is there.
For example, I have my DEV/TEST database and I know that my database is running
perfectly fine now, then as a test measure, I change a lot of things and end up
messing up with the database, affecting a lot of ends. Then, as magic, you can move
back your ENTIRE DATABASE with Flashback Database to point in the past where
everything were fine.
Differently from all other flashback operations, Flashback Database is not undo-based; it has its own Flashback Logs, which are used to perform these operations. We can see how far back we can go by querying the V$FLASHBACK_DATABASE_LOG view, columns OLDEST_FLASHBACK_SCN and OLDEST_FLASHBACK_TIME.
To make sure that you can perform Flashback Database operations, please make sure that you have enabled Flashback, following the steps below (a minimal sketch of the commands comes right after this list):
• Startup Mount;
• run the command that turns Flashback on;
100
• Open the database.
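As referenced above, a minimal sketch of enabling Flashback Database (assuming the database is already in archivelog mode and db_recovery_file_dest is set):
SQL> shutdown immediate
SQL> startup mount
SQL> alter database flashback on;
SQL> alter database open;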
To use flashback operations, make sure that your database is in MOUNT mode ,
otherwise you won’t be able to do so.
Once your database is properly setup for flashback database operations, we have 3
ways to perform this:
• SCN
• Timestamp
• Restore Point
With the first two you must already be familiar: you can go back to a specific past SCN, or to a time in the past using a timestamp. The command follows this syntax:
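The syntax was not in the extract; a minimal sketch of both variants (values are placeholders, and the database must be mounted):
SQL> flashback database to scn 1234567;
SQL> flashback database to timestamp to_timestamp('2016-06-12 16:00:00', 'YYYY-MM-DD HH24:MI:SS');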
Executing this, you are rolling back your whole database to the point in time defined.
Then you have the Restore Point feature, which is nothing more than YOU, manually, marking the database at some point and then turning back to this point. The good part here is that you can name this point in time as you prefer.
Let’s do an example:
The name of our restore point is BEFORE_CHANGES, but it can be named as your
preference. Thinking about our first example for non-production databases, we can
use just like we said:
• Go back in time with the whole database using the restore point created.
To perform the recovery using the Restore Point, you must have your database in
MOUNT mode. Once you have it, you are going to need to execute:
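A sketch of the command, using the restore point created above:
SQL> flashback database to restore point BEFORE_CHANGES;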
When the database finishes the flashback operation, you will need to open the database with the RESETLOGS option:
101
alter database open resetlogs;
There you go, guys. As we have seen, there are several ways to use the Flashback Database operation, and it is very useful in a lot of situations. I have just illustrated the most common one (for me).
I hope it has been a good read for you guys and not boring.
We have only one flashback type left to publish (Flashback Data Archive), and then we are going to move on to different subjects.
Cheers,
Rafael.
102
Flashback – Part 7 (Flashback Data Archive)
Hey everyone,
Finally, the last part of our flashback posts: FLASHBACK DATA ARCHIVE! If you didn't have a chance to check the previous posts, please do not hesitate to take a look if you need to or if you just got curious.
The Flashback Data Archive is a great option if you need to keep track of all changes for a very long time in your database. I mean, when all other Flashback options aren't good enough for you and you need to keep a much longer history, you need to use Flashback Data Archive, which will keep track of changes for the lifetime you define.
Why would I want to use that? Well, one of the options that I see, is about auditing
your DB.
Considering the configuration and use of Flashback Data Archive, we’re gonna list the
steps and then explain them with more details:
• Create a tablespace with enough space for your data archive (It can be an existing
one, but how about we keep ourselves better organized?)
• Create the Flashback Data Archive using the tablespace created on step1 and
define quota to the tablespace (optional) and define the retention of the FDA
(optional).
It is pretty straightforward and simple to configure and use it. So let’s get into the
details:
If you are here reading this post we assume that you already know how to create a
simple tablespace
103
SQL> create flashback archive audit_grepora tablespace tbs_grepora_archive quota 25g retention 2 year;
Of course, you can change all the parameters as you need using the ALTER command; a couple of examples follow below.
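A minimal sketch of such changes (sizes and retention are illustrative):
SQL> alter flashback archive audit_grepora modify retention 5 year;
SQL> alter flashback archive audit_grepora modify tablespace tbs_grepora_archive quota 50g;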
Also, you can clean up your Flashback Data Archive as you need. Imagine that you are running out of space, your data is too big and you don't need the oldest history: then we can PURGE the flashback data archive using an SCN or a timestamp, for example:
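A sketch of both purge variants (values are illustrative):
SQL> alter flashback archive audit_grepora purge before scn 1234567;
SQL> alter flashback archive audit_grepora purge before timestamp (systimestamp - interval '180' day);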
This is the simplest step. If you want an existing table to use a specific flashback data archive to keep the history of all its changes, if you are creating a new table and just want to add "flashback archive" at the end of its DDL, or if you want to remove a table from the FDA, it is all done with simple DDL, as sketched below:
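A minimal sketch of the three operations (names taken from the examples above):
SQL> alter table grepora flashback archive audit_grepora;
SQL> create table grepora_new (col1 number) flashback archive audit_grepora;
SQL> alter table grepora no flashback archive;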
Imagine that you want to check how that table looked 200 days ago. Then just use the AS OF TIMESTAMP clause in your SELECT statement, as already discussed in the previous posts.
104
If you want to check Flashback Data Archive information, please go through these
views:
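The list was not in the extract; the usual dictionary views for this are:
DBA_FLASHBACK_ARCHIVE
DBA_FLASHBACK_ARCHIVE_TS
DBA_FLASHBACK_ARCHIVE_TABLES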
We from GrepOra.com are very grateful to have the opportunity to share knowledge
and experience with everyone and we seriously want to help!
This is the end of Flashback posts. See you next week with some other subject
Rafael.
105
Alert Log: “Private Strand Flush Not
Complete” on Logfile Switch
Hi all!
Just a curiosity: have you ever noticed in a database alert log the occurrence of the following message on every logfile switch?
Thread 1 cannot allocate new log, sequence 9281
Private strand flush not complete
  Current log# 5 seq# 9280 mem# 0: /db/u5001/oradata/GREPORADB/redo05a.log
Thread 1 advanced to log sequence 9281 (LGWR switch)
  Current log# 6 seq# 9281 mem# 0: /db/u5001/oradata/GREPORADB/redo06a.log
It happens because, before every logfile switch, all private redo strands have to be flushed into the current log.
It’s well described by the docs Alert Log Messages: Private Strand Flush Not
Complete (Doc ID 372557.1 ) and Manual Log Switching Causing “Thread 1
Cannot Allocate New Log” Message in the Alert Log (Doc ID 435887.1) .
So, it’s an expected behavior and normal to transactional environments, don’t worry!
It’s simple to be reproduced too… Take a look:
session1> update teste set a=5 where a=2;
1 row updated.
session2> select t1.sid, t1.username, t2.xidusn, t2.used_urec, t2.used_ublk
          from v$session t1, v$transaction t2 where t1.saddr = t2.ses_addr;
 SID USERNAME    XIDUSN USED_UREC USED_UBLK
---- ----------- ------ --------- ---------
 304 MBOESING         4         1         1
session2> alter system switch logfile;
System altered.
Thread 1 cannot allocate new log, sequence 9289
Private strand flush not complete
  Current log# 4 seq# 9288 mem# 0: /db/u5001/oradata/GREPORADB/redo04a.log
Thread 1 advanced to log sequence 9289 (LGWR switch)
  Current log# 5 seq# 9289 mem# 0: /db/u5001/oradata/GREPORADB/redo05a.log
Ok! The expected behavior. Now let’s commit the transaction and repeat the process:
106
session1> commit;
Commit complete.
session2> select t1.sid, t1.username, t2.xidusn, t2.used_urec, t2.used_ublk
          from v$session t1, v$transaction t2 where t1.saddr = t2.ses_addr;
no rows selected
session2> alter system switch logfile;
System altered.

Thread 1 advanced to log sequence 9290 (LGWR switch)
  Current log# 6 seq# 9290 mem# 0: /db/u5001/oradata/GREPORADB/redo06a.log
107
TPS Chart on PL/SQL Developer
Hi all,
Since last post, some people asked me about how to make the charts using PL/SQL
Developer. It basically works for every kind of query/data, like MS Excel.
I’d recommend you to use with historic data, setting time as “X” axis.
Here the example for the post Oracle TPS: Evaluating Transaction per Second:
And get:
108
Have a nice day!
Matheus.
109
PL/SQL Developer Taking 100% of Database
CPU
When using PL/SQL Developer (Allround Automations), an internal query may take a lot of CPU cycles on the database server (100% of one CPU).
Is this your problem? Please check whether the query looks like this:
It's caused by the Describe Context option of the Code Assistant. To disable it:
Tools > Preferences > Code Assistant, and disable the "Describe Context" option.
By tool documentation:
“Describe context context to determine if the Code Assistant should describe the
context of the current user, editor and program unit.
The minimum number of characters identified in the context described can be called
before the word of how many characters need to be typed. Note that you can always
manually invoke code assist, even if the characters have not been typed.
Description of standard functions in the case of default, Code Assist will describe the
function of the standard the to_char, add_months. If you are familiar with these
functions, you can disable this option.”
110
I hope it helped you.
See ya!
Matheus.
111
Installing and Configuring ASMLIb on
Oracle Linux 7
Hi all!
For those familiar with RHEL/OEL 4 and 5, there are some differences in starting ASMLib on OEL 6 and 7.
So, here is a quick guide to install (done on OEL 7), start and configure it:
1. Install the ASMLib kernel module package as root using the following command. You can download the rpm libs from here and install them via rpm/yum:
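The exact commands were not in the extract; on OEL 7 it is presumably something like (package names per the ASMLib documentation, versions will vary):
# yum install kmod-oracleasm oracleasm-support
# rpm -ivh oracleasmlib-*.el7.x86_64.rpm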
112
Nothing happen? Ok, let’s try to start it:
Take a look:
Victory!
Now, let’s configure:
[root@dbsrv01 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver.
The following questions will determine whether the driver is loaded on boot
and what permissions it will have. The current values will be shown in
brackets ('[]'). Hitting <ENTER> without typing an answer will keep that
current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
113
[root@dbsrv01 ~]# oracleasm createdisk SDD /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@dbsrv01 ~]# oracleasm listdisks
SDD
114
ASM: Adding disk “_DROPPED%” FORCE
Ok doke,
First, let me make it clear: adding a disk with FORCE should be avoided, mainly because of all the rebalance involved. The best choice, if you have "time", is to just put the disks back online, like:
So, you know your disks by the name pattern (0 are FGMAIN and 1 are FGAUX, the problematic ones). You can do something like:
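The commands were images in the original post; a minimal sketch of both options (diskgroup, failgroup and disk paths are illustrative):
-- preferred: just bring the offline disks of the failgroup back online
SQL> alter diskgroup DGDATA online disks in failgroup FGAUX;
-- if the disks were already dropped, re-add them with FORCE (expect a full rebalance)
SQL> alter diskgroup DGDATA add failgroup FGAUX disk '/dev/oracleasm/disks/DISK1*' force rebalance power 8;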
Diskgroup altered.
Diskgroup altered.
115
While waiting the reball, let’s see the disks in DG:
no rows selected
116
DGDATA001 FGMAIN NORMAL
DGDATA002 FGMAIN NORMAL
DGDATA003 FGMAIN NORMAL
OK? Easy!
Matheus.
117
Adding ASM Disks on RHEL Cluster with
Failgroups
# Recognizing as ASMDISK on ASM Libs (ORACLEASM):
118
# Adding Disk on Diskgroup (sqlplus / as sysasm – ASM Instance)
1) Listing Failgroups
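The commands were images in the original post; a minimal sketch of the whole flow (disk, diskgroup and failgroup names are illustrative):
# on the node where the partition was created (as root):
oracleasm createdisk DISK05 /dev/sdf1
# on the remaining cluster nodes (as root):
oracleasm scandisks
oracleasm listdisks

-- listing existing failgroups (sqlplus / as sysasm):
SQL> select distinct failgroup from v$asm_disk where group_number = 1;
-- adding the new disk to a failgroup:
SQL> alter diskgroup DGDATA add failgroup FGMAIN disk 'ORCL:DISK05';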
Well done!
Matheus.
119
Manually Mounting ACFS
A server rebooted and I needed to remount the ACFS where the Oracle Home lives. About that:
Today's post: Manually Mounting ACFS
Someday's post: Kludge: Mounting ACFS Through Shellscript
Another day's post: Auto Mounting Cluster Services Through Oracle Restart
# Starting ACFS
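The commands were images in the original post; a minimal sketch, assuming the volume metadata shown below (device and volume names are illustrative):
# load the ACFS/ADVM drivers if they are not loaded (as root):
$GRID_HOME/bin/acfsload start -s
# enable the ADVM volume (as the grid user):
asmcmd volenable -G DGMYDB mydbvol
# mount the filesystem on its mountpath (as root):
mount -t acfs /dev/asm/mydbvol-123 /oracle/MYDB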
120
Size (MB): 10240
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /oracle/MYDB
Matheus.
121
Kludge: Mounting ACFS Through Shellscript
Just the script. The history is here .
This is a "workaround" script. As always, the recommendation is to use Oracle Restart, like I posted here .
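The script itself was not in the extract; a minimal sketch of what such a kludge usually looks like (paths and device names are illustrative):
#!/bin/bash
# mount the ACFS filesystem at boot if it is not already mounted
MNT=/oracle/MYDB
DEV=/dev/asm/mydbvol-123
if ! mount | grep -q " ${MNT} "; then
  /grid/product/11.2.0/bin/acfsload start -s
  mount -t acfs ${DEV} ${MNT}
fi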
See ya!
Matheus.
122
CRSCTL: AUTO_START of Cluster Services
(ACFS)
As I said a long time ago (Manually Mounting ACFS)… Here it is:
To set the autostart of a resource (in my case an ACFS) through CRSCTL, here is a simple example:
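The example was not in the extract; a sketch with an illustrative ACFS resource name (on recent versions, changing ora.* resources may additionally require the -unsupported flag):
# check the current setting
crsctl stat res ora.dgmydb.mydbvol.acfs -p | grep AUTO_START
# make the resource start automatically with the stack
crsctl modify resource ora.dgmydb.mydbvol.acfs -attr "AUTO_START=always"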
# KB
http://docs.oracle.com/cd/E11882_01/rac.112/e16794/resatt.htm#CWADD91444
Matheus.
123
Changing ACFS mount point
I do checked there’s no good way to change ACFS mounting point on asmca
assistant, so I decided to document how I quickly change ACFS mount point:
• Do bellow:
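The steps were images in the original post; a minimal sketch (device and paths are illustrative; the registry update keeps the new mount point across reboots):
# unmount from the old mount point (as root)
umount /oracle/MYDB
# update the ACFS mount registry
acfsutil registry -d /dev/asm/mydbvol-123
acfsutil registry -a /dev/asm/mydbvol-123 /oracle/NEWPATH
# mount on the new location
mount -t acfs /dev/asm/mydbvol-123 /oracle/NEWPATH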
Maiquel.
124
ORA-27054: NFS file system where the file is
created or resides is not mounted with
correct options
Due to the ease with which we can go to the future or return to the past using GoldenGate, it becomes increasingly necessary to restore archivelogs from backup; sometimes it is necessary to recover several days of them.
To do it we generally need a large disk area, and at this point the search for storage disks starts.
After finding a disk, it needs to be mounted; I did it with simple mount options on AIX.
After trying to restore the first archivelog piece, I got the error:
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options.
Using the table below, just adjust the mount point options according to your system:
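The table itself was not in the extract; as an illustration, the commonly recommended NFS options for RMAN backup pieces on AIX look roughly like this (check MOS note 359515.1 for the exact list for your platform and file type):
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,timeo=600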
Dieison.
125
Error: Starting ACFS in RHEL 6 (Can’t exec
“/usr/bin/lsb_release”)
Quick tip:
# Error:
[root@db1gridserver1 bin]# ./acfsload start -s
Can’t exec “/usr/bin/lsb_release”: No such file or directory at
/grid/product/11.2.0/lib/osds_acfslib.pm line 511.
Use of uninitialized value $LSB_RELEASE in split at
/grid/product/11.2.0/lib/osds_acfslib.pm line 516.
# Solution:
[root@db1gridserver1 bin]# yum install redhat-lsb-core-4.0
Note: Bug 17359415 – Linux: Configuring ACFS reports that cannot execute
‘/usr/bin/lsb_release’ (Doc ID 17359415.8)
Matheus.
126
Create SPFILE on ASM from PFILE on
Filesystem
Some basics, right?
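The command was not in the extract; a minimal sketch (diskgroup and paths are illustrative):
SQL> create spfile='+DGDATA/MYDB/spfileMYDB.ora' from pfile='/home/oracle/initMYDB.ora';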
Another thing that is not usual, and every time I do it someone is surprised: "shu" as an alias for "shutdown":
Master Burleson also wrote about it. Take a look at a better, more detailed post about this subject: http://www.dba-oracle.com/concepts/pfile_spfile.htm .
Matheus.
127
ORA-15186: ASMLIB error function
Almost a month away… My bad!
Here I go again with a quick tip about something I went through today. Our kernel was 'changed' without notice and this began to happen:
The solution is basically to update the asmlib packages, which are tied to the kernel version. For RHEL, the solution is well described here:
http://www.oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
https://access.redhat.com/solutions/315643
Just to remember: after a kernel change, a relink of your Oracle Home is highly recommended.
128
Charsets: Single-Byte vs Multibyte
Encoding Scheme Issue
Sad history:
IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column "SCHEMA"."TABLE"."COLUMN" (actual: 61, maximum: 60)
Of course, the more specific a charset configuration is, the better it is for performance (especially for sequential reads), because the database needs to work with fewer bytes in the datasets/datablocks for the same tuples, to explain it in a simple way. On the other hand, this is a quite specific configuration: performance issues are usually better addressed by simpler tunings (SQL access plan, indexing, statistics or solution architecture) than by this kind of detail. Still, it's worth mentioning if you're working on a database that is already well tuned…
The following image illustrates, in a simple way, the different number of bytes used to address more characters (a characteristic of supersets):
Ok, doke!
And the solution is…
Let’s summarize the problem first: The char (char, varchar) columns uses more
bytes to represent the same characters. So the situations where, in the source, the
column was used by the maximum lengh, it “explodes” the column lengh in the
destination database with a multibyte encoding scheme.
For consideration, I’m not using datapump (expdp/impdp or impdb with networklink)
129
just because it’s a legacy system with long columns. Datapump doesn’t support this
“deprecated” type of data.
So, my solution, for this pontual problem occouring during a migration was to change
the data lengh of the char columns from “byte” to “char”. This way, the used metric is
the charchain rather than bytesize. Here is my “kludge” for you:
select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY '||column_name||' CHAR('||data_length||' CHAR);'
  from dba_tab_cols
 where data_type = 'CHAR' and owner = '&SCHEMA'
union all
select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY '||column_name||' VARCHAR2('||data_length||' CHAR);'
  from dba_tab_cols
 where data_type = 'VARCHAR2' and owner = '&SCHEMA';
And it works!
Hugs and see ya!
Matheus.
130
Date Format in RMAN: Making better!
I know…
The date format on RMAN it’s not good, but it’s to make it better. Take a look:
Matheus.
131
Creating RMAN Backup Catalog
It can sound repetitive, but it's always good to have notes about it.
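The steps were not in the extract; a minimal sketch of creating a recovery catalog (user, tablespace and connect strings are illustrative):
-- in the catalog database:
SQL> create tablespace rman_cat datafile size 1g;
SQL> create user rmancat identified by rmancat default tablespace rman_cat quota unlimited on rman_cat;
SQL> grant recovery_catalog_owner to rmancat;

$ rman catalog rmancat/rmancat@catdb
RMAN> create catalog;

-- then, connected to the target database:
$ rman target / catalog rmancat/rmancat@catdb
RMAN> register database;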
Well done!
Matheus.
132
EXP Missing Tables on 11.2
Made an exp and some tables are missing, right? The database is 11.2+? The missing tables have no rows in the source database, right? Bingo!
This happens because Oracle implemented a space-saving feature in 11.2 called Deferred Segment Creation.
This feature basically makes the first segment of a table be allocated only when the first row is inserted. It was implemented because Oracle realized it is not rare to find databases with lots of tables that have never had a single row.
The situation occurs because the EXP client uses dba_segments as the index for exporting and, with this feature, no segment gets allocated. For Oracle it's not a problem, considering the use of Data Pump (EXPDP/IMPDP).
But (there always exist a “but”), let’s suppose you have to export the file to a different
location not accessible by directory nor has local space, or either, your table has a
long column (yes, it’s deprecated, I know… but let’s suppose this is a legacy
system…). Then, you can do:
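The command was not in the extract; the usual trick is to force segment creation for the empty tables before running exp, along these lines:
-- generate an ALTER TABLE ... ALLOCATE EXTENT for every table still without a segment:
select 'alter table '||owner||'.'||table_name||' allocate extent;'
  from dba_tables
 where segment_created = 'NO'
   and owner = '&SCHEMA';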
Hope It helped.
See ya!
Matheus.
133
DDBoost: sbtbackup:
dd_rman_connect_to_backup_host failed
A common error. It happens when the Data Domain host or mtree is unreachable.
For the first situation, contact the OS/network administrator. It can be a firewall limitation, a DNS miss (if using DNS hosting) or, in some cases, networks that are physically unreachable.
Starting backup at 24-OCT-15
using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=191 instance=almdbdw_1 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_2
input datafile file number=00001 name=+DGMYDB/almdbdw/datafile/system.267.849463017
channel ORA_SBT_TAPE_1: starting piece 1 at 22-JUL-15
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_SBT_TAPE_1 channel at 10/24/2015 10:03:50
ORA-19506: failed to create sequential file, name="a4qcme1l_1_1", parms=""
ORA-27028: skgfqcre: sbtbackup returned error
ORA-19511: Error received from media manager layer, error text:
   sbtbackup: dd_rman_connect_to_backup_host failed
channel ORA_SBT_TAPE_1 disabled, job failed on it will be run on another channel
Send the user/password to access the Data Domain as follows and, after that, re-run your action.
Hugs!
Matheus.
134
EXP-00079 – Data Protected
A quick one: I began to have this problem on 12c’s backup catalog schemas. The
reason is that by now all information is protected by policies (VPD). The error:
The solution:
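The commands were images in the original post; one common workaround for EXP-00079 on VPD-protected schemas is to let the exporting user bypass the policies (my assumption here; evaluate the security impact first):
SQL> grant exempt access policy to system;
-- run the export, then revoke:
SQL> revoke exempt access policy from system;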
Hugs!
Matheus.
135
Backup Not-Yet-Backed-Up Archivelogs and Delete Input
Hi all!
Sometimes you are caught in a situation where your database is not backing up archivelogs and you need to quickly generate backup commands for those not yet backed up, deleting them afterwards, right?
I saw this situation in this archived discussion at OTN. Unfortunately I couldn't post my answer there… but here is how I do it:
And be happy!
But one observation! It does not work this way for databases with Data Guard. For these cases you'll need to add something like " and name <> '&dgname' " to the select's where clause…
See ya!
Matheus.
136
How to list all my Oracle Products from
Database park?
This is part of the DBA role: knowing and surveying the use of Oracle products for the periodic Oracle contract review, isn't it?
It usually represents a huge problem or, at least, demands a long time to refresh your spreadsheet…
Without further ado, here's a query that can map your environment (at least your Oracle Database products):
You can use it to automate a report and set thresholds. Be creative…
PS: From now on, I'll post everything in English. Just for fun.
137
f.target_guid
and h.target_type in ('oracle_database','rac_database')
and s.target_type = h.target_type
and s.snapshot_type in ('oracle_dbconfig','oracle_racconfig')
and f.DETECTED_USAGES > 0 ) opt
where hcd.target_guid = ohs.target_guid
  and ohs.host_name = ddi.host_name
  and ddi.target_guid = opt.target_guid
  and ( opt.name like '%Active Data Guard%'                          -- Active Data Guard
     or opt.name like '%Advanced Compression%'                       -- Advanced Compression
     or opt.name like '%Audit Vault%'                                -- Audit Vault
     or opt.name like '%Database Vault%'                             -- DB Vault
     or opt.name like '%Partitioning (user)%'                        -- Partitioning
     or opt.name like '%Real Application Clusters%'                  -- RAC
     or opt.name like '%Real Application Testing%'                   -- RAT
     or opt.name like '%ADDM%'                                       -- Diagnostic Pack
     or opt.name like '%Automatic Database Diagnostic Monitor%'      -- Diagnostic Pack
     or opt.name like '%Automatic Workload Repository%'              -- Diagnostic Pack
     or opt.name like '%AWR%'                                        -- Diagnostic Pack
     or opt.name like '%Baseline%'                                   -- Diagnostic Pack
     or opt.name like '%Diagnostic Pack%'                            -- Diagnostic Pack
     or opt.name like '%SQL Monitoring%'                             -- Tuning Pack
     or opt.name like '%SQL Performance%'                            -- Tuning Pack
     or opt.name like '%SQL Profile%'                                -- Tuning Pack
     or opt.name like '%SQL Tuning%'                                 -- Tuning Pack
     or opt.name like '%SQL Access%'                                 -- Tuning Pack
     or opt.name like '%Tuning Pack%'                                -- Tuning Pack
     or opt.name like '%Change Management Pack%'                     -- Change Management Pack
     or ddi.edition like 'Enterprise Edition')
order by ddi.host_name;
Matheus.
138
How to list all my Oracle Products from
Application park?
YES!
I knew you would like the last post!
Here is a query to list your Oracle Application Products (including Oracle SOA Suite,
of course) from OEM.
Use wisely:
139
Server 10g%'
or LBL_PRODUCTNAME like '%Application Server Infrastructure 10g%'
or LBL_PRODUCTNAME like '%Business Intelligence%'
or LBL_PRODUCTNAME like '%Oracle SOA Suite%'
or LBL_PRODUCTNAME like '%Oracle BAM%'
or LBL_PRODUCTNAME like '%WebCenter Portal Suite 11g'
or LBL_PRODUCTNAME like '%Oracle Business Process Management%'
or LBL_PRODUCTNAME like '%Application Server Configuration%'
or LBL_PRODUCTNAME like '%Oracle Application Server Guard%'
or LBL_PRODUCTNAME like '%Oracle Remote Intradoc Client%' )
order by "Produto");
Matheus.
140
Service Detected on OEM but not in SRVCTL
or SERVICE_NAMES Parameter?
Okay, it happens.
To me, it happened after moving a database from one cluster to another. The service was registered via SRVCTL in the old cluster but was not needed anymore, so it was not registered in the new cluster.
But OEM insists on listing, for example, "service3" as offline. The problem is that you cannot remove it via SRVCTL, because you never registered it there, right? See the example below:
Listing services:
srvdatabase1:/home/oracle> sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 8 15:21:00 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options
SQL> show parameters service;
NAME TYPE
------------------------------------ --------------------------------
VALUE
------------------------------
service_names string
service2,test,systemdb
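The fix itself was not in the extract; since the leftover service exists only in the data dictionary, presumably it is removed with DBMS_SERVICE (stop it first if it happens to be running):
SQL> exec dbms_service.delete_service('service3');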
Matheus.
141
Manipulating JMS queues using WLST
Script
Hi.
Today, let’s talk about Java Message Systems (JMS), the reason led me talk about
this, is that my environment, a complex architecture of messages where we have
more of two hundred queues in the same domain.
The administration of queues in the weblogic console is very simple, but, if you need
to remove a million messages, in a hundred queues, you have a problem!
To turn more agile the visualization of messages, state and other queue properties,
nothing better than to use WLST.
This post shows a script, which can grow up where you imagine, for while the script
have just three options (the most useful to me) and nothing prevents to have more.
1 – Pause consumer
2 – Resume consumer
3 – Delete messages
You just need to edit the script to add user, password and admin console url.
# @author Dieison Larri santos
# 30/04/2016
print " What do you need?"
print " "
print "1 - Pause Consumer"
print "2 - Resume Consumer"
print "3 - Delete Messages"
task = raw_input("choose an option: ")
task = int(task)
connect('username', 'password', 't3://admin_console.net:7001')
servers = domainRuntimeService.getServerRuntimes()
if (len(servers) > 0):
    for server in servers:
        jmsRuntime = server.getJMSRuntime()
        jmsServers = jmsRuntime.getJMSServers()
        for jmsServer in jmsServers:
            destinations = jmsServer.getDestinations()
            for destination in destinations:
                pen = destination.getMessagesPendingCount()
                cur = destination.getMessagesCurrentCount()
                sum = pen + cur
                print 'Name: ' + destination.getName(), '; Messages Count:', sum, '; Paused: ', destination.isPaused()
                if task == 1: destination.pauseConsumption()
                if task == 2: destination.resumeConsumption()
                if task == 3: destination.deleteMessages('')
disconnect()
To execute: $WL_HOME/common/bin/wlst.sh script_name.py.
Dieison.
142
Decrypting WebLogic Datasource Password
Hi Guys,
Today I bring you a script that I use to decrypt datasource passwords and also the AdminServer password, which is very useful on a daily basis.
The script uses the encrypted password that is found within the datasource configuration files ($DOMAIN_HOME/config/jdbc/*.xml).
To decrypt the AdminServer password, the encrypted password contained in boot.properties ($DOMAIN_HOME/servers/AdminServer/security) is used.
#===============================================================================
# This script decrypts WebLogic passwords
#
# Usage:
#    wlst decryptPassword.py <DOMAIN_HOME> <encrypted_password>
#===============================================================================
import os
import weblogic.security.internal.SerializedSystemIni
import weblogic.security.internal.encryption.ClearOrEncryptedService

def decrypt(domainHomeName, encryptedPwd):
    domainHomeAbsolutePath = os.path.abspath(domainHomeName)
    encryptionService = weblogic.security.internal.SerializedSystemIni.getEncryptionService(domainHomeAbsolutePath)
    ces = weblogic.security.internal.encryption.ClearOrEncryptedService(encryptionService)
    clear = ces.decrypt(encryptedPwd)
    print "RESULT:" + clear

try:
    if len(sys.argv) == 3:
        decrypt(sys.argv[1], sys.argv[2])
    else:
        print "INVALID ARGUMENTS"
        print " Usage: java weblogic.WLST decryptPassword.py <DOMAIN_HOME> <encrypted_password>"
        print " Example:"
        print " java weblogic.WLST decryptPassword.py D:/Oracle/Middleware/user_projects/domains/base_domain {AES}819R5h3JUS9fAcPmF58p9Wb3swTJxFl0t8NInD/ykkE="
except:
    print "Unexpected error: ", sys.exc_info()[0]
    dumpStack()
    raise
For example:
[oracle@app1osbgrepora1l scripts]$ source
/oracle/domains/osb_domain/bin/setDomainEnv.sh
[oracle@app1osbgrepora1l osb_domain]$ java weblogic.WLST decryptPassword.py
/oracle/domains/osb_domain/
{AES}WdbfYhD1EbVXmIe62hLftef4WtNPvyRDGc1/lsyQ014=
Initializing WebLogic Scripting Tool (WLST) …
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
RESULT :OSBPASS123
143
That’s all for today
Jackson.
144
Setting up a weblogic Result cache on
Oracle Service Bus
Hi Guys,
These days, even with the new ideals about agile methods and the various attempts to bring infrastructure and development together (DevOps), we still have a lot of code developed at a great distance from the machines and the OS. In this scenario, a lot of exceptions show up in the application logs, but most of them can't really be considered the actual problem.
For this lab, two machines and two managed servers on a cluster will be used.
First, let's create two coherence servers, one for each machine:
For each coherence server we must set one lib and one module in the classpath; this box is found on the "Start Server" page of the Coherence Server.
145
In the same page, we need to configure the box “Arguments:” to define coherence
hosts and ports.
Attention to fill properly ‘localhost’: For the coherence server1 the localhost value is
machine01, to the coherence server2 the value for localhost is machine2.
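The argument values were shown as images in the original post; a sketch of what they typically look like (property names from the Coherence documentation; hosts and ports are illustrative, and localhost changes per server as noted above):
-Dtangosol.coherence.wka1=machine01 -Dtangosol.coherence.wka1.port=7777
-Dtangosol.coherence.wka2=machine02 -Dtangosol.coherence.wka2.port=7777
-Dtangosol.coherence.localhost=machine01 -Dtangosol.coherence.localport=7777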
After setting up the two coherence servers, let's create a coherence cluster; the target must be the managed servers or the WebLogic server cluster:
After setting up the coherence cluster, set the new cluster on each coherence server.
146
The last step is to configure the coherence server parameters on each managed server, in the "Arguments" box on the "Start Server" page of each managed server.
Again, pay attention to fill 'localhost' properly: for managed server1 the localhost value is machine01; for managed server2 the localhost value is machine02.
To validate the settings, start the coherence servers and wait for the RUNNING status.
Celebrate with a good wine!
Dieison.
147
Avoiding lost messages in JDBC Persistent
Store, when processing Global Transactions
with JMS.
A few months ago I had a problem with the Persistent Store of JMS queues: right after a server restart, I got an error from the persistent store while it tried to recover messages:
To resolve this problem, just add this parameter to the server startup arguments:
-Dweblogic.store.StoreBootOnError=true
With this parameter the server starts with OK status in WebLogic 11g and with FAILED status in WebLogic 12c, but in both cases the processing of messages continues once active.
To remove the FAILED status in WebLogic 12c, you just need to truncate the persistence table in the database and restart the server (this solution can be found in the Oracle docs).
That solution did not solve my problem, because I can't lose or delete messages.
If I start the server with the parameter mentioned above, I get this error:
After analyzing the two behaviors, and paying special attention to this error (Ignoring 2PC record for sequence…), I went to investigate the best configuration to use JMS with global transactions, because I always had a doubt about why the persistent store datasource is non-XA: what is the behavior of global transactions in this case? And then I found out about the LLR Optimization (Logging Last Resource).
This configuration explains why you cannot use an XA driver for the JDBC persistent store.
148
The information below can be found HERE .
At server boot or data source deployment, LLR data sources load or create a table on
the database from which the data source pools database connections. The table is
created in the schema determined by the user specified to create database
connections. If the database table cannot be created or loaded, then server boot will
fail.
Within a global transaction, the first connection obtained from an LLR data source
reserves an internal JDBC connection that is dedicated to the transaction. The internal
JDBC connection is reserved on the specific server that is also the transactions’
coordinator. All subsequent transaction operations on any connections obtained from
a same-named data source on any server are routed to this same single internal
JDBC connection.
When an LLR transaction is committed, the WebLogic Server transaction manager
handles the processing transparently. From an application perspective, the transaction
semantics remain the same, but from an internal perspective, the transaction is
handled differently than standard XA transactions. When the application commits the
global transaction, the WebLogic Server transaction manager atomically commits the
local transaction on the LLR connection before committing transaction work on any
other transaction participants. For a two-phase commit transaction, the transaction
manager also writes a 2PC record on the database as part of the same local
transaction. After the local transaction completes successfully, the transaction
manager calls commit on all other global transaction participants. After all other
transaction participants complete the commit phase, the related LLR 2PC transaction
record is freed for deletion. The transaction manager will lazily delete the transaction
record after a short interval or with another local transaction.
If the application rolls back the global transaction or the transaction times out,
the transaction manager rolls back the work in the local transaction and does
not store a 2PC record in the database.
First Step:
149
Create a datasource for the persistent store with a non-XA driver.
Second Step:
Go to the transaction page of the new Datasource, and select these check boxes as
below:
Restart the server and hope to never lose messages in persistence again!
Dieison.
150
Reset the AdminServer Password in
WebLogic 11g and 12c
Reset the AdminServer Password in WebLogic 11g and 12c:
source $DOMAIN_HOME/bin/setDomainEnv.sh
cd $DOMAIN_HOME/servers/AdminServer/
mv data data-old
cd $DOMAIN_HOME/security
java weblogic.security.utils.AdminAccount weblogic <new_password> .
OBS: check the post on decrypting the datasource password, which can also be used to decrypt the credentials in the boot.properties file, avoiding the procedure above if this file exists.
151
Configuration Coherence Server
Out-of-Process in OSB 12C
Hello guys,
From
To
osb-coherence-cache-config.xml
osb-coherence-override.xml
1 – Create the managed servers for coherence and also the cluster for these
managed servers:
3 – Add the cluster created for the managed servers of coherence to the targets of the
“Coherence cluster” automatically created in the default installation
(defaultCoherenceCluster):
152
4 – Restart the coherence’s managed servers;
153
6 – Add to the arguments in server start of the managed servers of OSB:
“-DOSB.coherence.cluster=CoherenceCluster
-Dtangosol.coherence.distributed.localstorage=false”;
154
WebLogic AdminServer Startup stopped at
“Initializing self-tuning thread pool”
After starting the AdminServer, it remains in STARTING status and stops writing to the log file at:
Check the disk space used, to make sure that there are no partitions with 100% utilization, including /tmp.
After that, make sure the owner of the WebLogic installation (oracle) has write permission on /tmp.
If the WebLogic owner does not have write permission, it must be granted, because the application server writes some temporary files in that directory:
Jackson.
155
Weblogic starting with the operating system
Hi,
Today, let’s to configure weblogic services startup, when machines starts.
In some blogs, we can find a bunch of customized scripts that create and set variables
to startup the adminservers, nodemanagers and managed server, but, in my case, i
just need to start adminserver and nodemanger, when machines start just after an
incident.
For this situation, we need that the startup of application do not interrupt the operation
system startup.
Without create scripts or complex configurations, to obtain this behavior we just need
add startup of services in the file /etc/rc.local.
When you uses “su – oracle -c” the operation system makes a call to oracle user.
Using rc.local, the last OS execution file after startup, you guarantee to not interrupt
system startup.
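The rc.local entries were not in the extract; a minimal sketch of what they usually look like (domain and home paths are illustrative):
# /etc/rc.local
su - oracle -c "nohup /oracle/binaries/wlserver_10.3/server/bin/startNodeManager.sh > /dev/null 2>&1 &"
su - oracle -c "nohup /oracle/domains/osb_domain/bin/startWebLogic.sh > /dev/null 2>&1 &"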
Enjoy.
Dieison.
156
WLST easeSyntax
Who works with WLST know it’s pretty boring to natigate to MBeans, because
whenever necessary to put in parentheses () commands and quotation marks ‘ ‘.
When we forget, need to retype the whole command again.
I found a command that helps a lot when it comes to navigate in MBean tree, it
eliminates the need for parentheses and quotation marks.
After entering the WLST, type:
wls:/xpto_domain/serverConfig> easeSyntax()
wls:/xpto_domain/serverConfig> ls
dr-- AdminConsole
…
dr-- SelfTuning
dr-- Servers
dr-- ShutdownClasses
dr-- SingletonServices
wls:/xpto_domain/serverConfig> cd Servers
wls:/xpto_domain/serverConfig/Servers> ls
dr-- AdminServer
dr-- WLS1_MSWS1
dr-- WLS1_MSWS2
wls:/xpto_domain/serverConfig/Servers> cd WLS1_MSWS1
wls:/xpto_domain/serverConfig/Servers/WLS1_MSWS1> cd Log
wls:/xpto_domain/serverConfig/Servers/WLS1_MSWS1/Log> cd ..
wls:/xpto_domain/serverConfig/Servers/WLS1_MSWS1> cd Machine
wls:/xpto_domain/serverConfig/Servers/WLS1_MSWS1/Machine> ls
dr-- app1wsmachine1
I haven't tested it within Python scripts, only for browsing the MBean tree interactively.
Jackson.
157
Quickly change Weblogic to Production
Mode
You were in a hurry to deploy your newest project on WebLogic 12c and only later discovered that you had created your environment in development mode (OOPS =/).
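The fix was shown as an image in the original post; the usual way is to flip the domain to production mode and restart it, for example:
# in $DOMAIN_HOME/bin/setDomainEnv.sh set:
PRODUCTION_MODE="true"
# or, equivalently, in $DOMAIN_HOME/config/config.xml (inside <domain>):
#   <production-mode-enabled>true</production-mode-enabled>
# then restart the domain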
Maiquel.
158
Weblogic in debug mode
Usually, in non-production environments, it is necessary to troubleshoot applications deployed on a WebLogic server. The default log (.out) does not report or detail conclusively the real cause of the problem.
In this case, beyond the log levels that can be configured via the WebLogic console (Managed Server > Logging > Advanced), we can add the following arguments to the JVM startup arguments (Managed Server > Configuration > Server Start > Arguments):
-Dweblogic.webservice.verbose=true -Dweblogic.wsee.verbose=*
-Dweblogic.wsee.verbose=weblogic.wsee.* -Dweblogic.wsee.verbose.timestamp=true
Recommended only during troubleshooting, because it generates a lot of logs.
Jackson.
159
Apache 2.4 with port redirect to Weblogic
12c
According to the Oracle guys, there is a vanilla Apache 2.4 plug-in module for WebLogic 12c, and the same module also works with WebLogic 11g.
# httpd -version
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built:   Mar 21 2016 02:33:00
So, after configuring and restarting httpd, errors caused by mod_wl_24.so started showing up in the system messages:
To fix it, edit /etc/ld.so.conf:
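The actual entries were images in the original post; the usual fix is to make the plug-in's lib directory known to the dynamic linker (the path is illustrative):
# add the WebLogic proxy plug-in lib directory to /etc/ld.so.conf (or a file under /etc/ld.so.conf.d/):
/opt/wls-plugin/lib
# then rebuild the linker cache and restart httpd:
ldconfig
systemctl restart httpd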
Maiquel.
160
Oracle Licensing: Weblogic Tip!
As a complement to yesterday's post about Oracle Database licensing, today's post is a little tip for evaluating WebLogic licensing, considering an audit…
There is also a py script on Oracle Support that can be executed via WLST on the Admin Server. Please take a look at: WebLogic Server Basic License Feature Usage Measurement Script (Doc ID 885587.1)
Hugs!
Matheus.
161
Weblogic JFR files in /tmp
Problem:
In WebLogic 11g, there are several JFR files in the /tmp directory:
These files are from the DMS (Dynamic Monitoring Service) and they are created while the application server is running.
By default, these files are generated in this directory and it is not possible to turn this off.
As a workaround, you can redirect where these files are generated with the parameter "-XX:FlightRecorderOptions=repository".
For example: -XX:FlightRecorderOptions=repository=/oracle/tmp/
162
Restart the servers.
Jackson.
163
Bypass user and password in the Oracle
BAM ICommand.
Every time you need to execute ICommand, you must enter the user and password of
the application server running Oracle BAM.
With the configuration below, it is no longer necessary to inform username and
password every time.
weblogic YOUR_PASSWORD
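The XML tags around the values above were lost in the extract; presumably this refers to the ICommand defaults in BAMICommandConfig.xml, something along these lines (element names assumed from the BAM documentation):
<ICommand_Default_User_Name>weblogic</ICommand_Default_User_Name>
<ICommand_Default_Password>YOUR_PASSWORD</ICommand_Default_Password>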
Jackson.
164
<EJB Exception in method: ejbPostCreate:
java.sql.SQLException: XA error:
XAResource.XAER_RMFAIL start() failed on
resource 'ggds-datasource_domain':
XAER_RMFAIL : Resource manager is
unavailable
Some incidents that we face are expected. Usually we expect problems when
something changes in an environment.
But sometimes, for no apparent reason and with no systemic change, we run into
errors where our first reaction is: what the f***!?
This time we found a Java exception in a standard domain for GoldenGate Director.
For months the application had been stable and functional, until it failed for no
apparent reason.
In version 12.1 this bug is fixed, but as a palliative solution you can do the following:
165
The problem was solved, at least for now.
Dieison.
166
Error BAD_CERTIFICATE in Node Manager
Error:
Solution:
source $DOMAIN_HOME/bin/setDomainEnv.sh
. $WL_HOME/server/bin/setWLSEnv.sh
java utils.CertGen -cn `hostname` -keyfilepass DemoIdentityPassPhrase -certfile mycert -keyfile mykey
java utils.ImportPrivateKey -keystore DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase -keyfile mykey.pem -keyfilepass DemoIdentityPassPhrase -certfile mycert.pem -alias demoidentity
cp DemoIdentity.jks $WL_HOME/server/lib
167
$WL_HOME/common/bin/wlst.sh
connect('weblogic','password','t3://app1osbxpto1.localhost.net:7001')
nmEnroll('/oracle/domains/osb_domain','/oracle/binaries/wlserver_10.3/common/nodemanager/')
exit()
Restart node manager.
Jackson.
168
Weblogic – Wrong listening address
This week we had an unexpected stop on a WebLogic server. After starting this server,
it played a trick: it refused any telnet request on the managed server port, even from
localhost, although it had started successfully.
Check this:
The server instance for which you configure the listen address does not need to be
running.
• If you have not already done so, in the Change Center of the Administration
Console, click Lock & Edit (see Use the Change Center ).
• In the left pane of the Console, expand Environment and select Servers .
• Click Save .
On Administration console:
169
Maiquel.
170
Enabling GoldenGate 12c DDL replication
For some IT demands it's necessary to replicate DDL (Data Definition Language)
statements to keep source and target equalized.
SQL> @role_setup.sql
GGS Role setup script
Enter GoldenGate schema name: ggate
Wrote file role_setup_set.txt
PL/SQL procedure successfully completed.
Role setup script complete
Grant this role to each user assigned to the Extract, GGSCI, and Manager processes,
by using the following SQL command:
GRANT GGS_GGSUSER_ROLE TO <user>
where <user> is the user assigned to the GoldenGate processes.
SQL> GRANT GGS_GGSUSER_ROLE TO ggate;
Grant succeeded.
SQL> @ddl_enable.sql
Trigger altered.
Sample:
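The sample itself was not carried into this copy; a sketch of the usual sequence (run as SYSDBA from the GoldenGate home, schema name taken from the post):

SQL> @marker_setup.sql        -- enter schema: ggate
SQL> @ddl_setup.sql           -- enter schema: ggate
SQL> @role_setup.sql          -- output shown above
SQL> GRANT GGS_GGSUSER_ROLE TO ggate;
SQL> @ddl_enable.sql
-- and, in the Extract parameter file, something like:
DDL INCLUDE MAPPED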
Maiquel.
171
How to find GoldenGate recovery time
Sometimes it’s necessary to restart GoldenGate process, and after start GG Extract, it
take’s long time ‘in recovery’ status.
It’ a interesting subject, and can be found here (before read below ) .
GGSCI (greporagg) 16> send EXT status
EXTRACT EXT (PID 23068830)
Current status: In recovery[1]: Processing data
Current read position:
Redo thread #: 2
Sequence #: 4246
RBA: 223285824
Timestamp: 2016-10-08 07:32:36.000000
SCN: 1658.1839128718
Current write position:
Sequence #: 29295
RBA: 74336127
Timestamp: 2016-10-14 17:59:43.476624
Extract Trail: ./dirdat/TR
Maiquel.
172
GoldenGate Integrated Capture and
Integrated Replicat Healthcheck Script
GoldenGate Integrated Extract gives DBAs a powerful tool to check GoldenGate's
operation in the database; this package can be downloaded from Doc ID 1448324.1.
This Healthcheck is similar to AWR reports and has been very useful to find errors and
bottlenecks.
Environment overview:
Performance tips:
This HC uses system views created by OGG, so you can customize your own HC.
Maiquel.
173
GoldenGate: RAC One Node Archivelog
Missing
The situation:
We have a GoldenGate on Allow Mode running some extracts on a RAC One Node
database (reading the archive logs). Then, suddenly, the instance crashed (the network
lost contact with the server) and the other instance (thread) was automatically started by
CRS. For the database, no problem: the other node's redo logs were used during the
startup recovery and everything was OK.
The application, running with a WebLogic server pool and GridLink, just had a little
contention and continued operating through the newly started instance. The GoldenGate
switch was made manually, but some sequences were lost. What did we find? The
sequences were still in the old thread's redo log files. They would have been backed up
if fast_start_mttr_target were different from zero. Buuut, the world is not so beautiful:
How did we solve it?
Simple solution: we identified the group/thread and made a cp from ASM. The copied
redo log was used as an archive log by GoldenGate and everything was OK.
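The actual commands were not included in this copy; a hedged sketch of the idea (diskgroup, file and directory names are illustrative):

-- find the online redo log of the old thread that still holds the missing sequence
SQL> select l.thread#, l.sequence#, f.member
       from v$log l, v$logfile f
      where l.group# = f.group# and l.thread# = 2;

# then copy it out of ASM and let GoldenGate read it as if it were an archive log
asmcmd cp +REDO/mydb/onlinelog/group_4.268.912345678 /oracle/ggate/recover/redo_t2_s4246.log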
Matheus.
174
GoldenGate GGSCI> shortcut tips
GGSCI (GoldenGate Software Command Interface) has some interesting shortcuts,
quick and handy for day-to-day GoldenGate use.
My preferred:
GGSCI (grepora) 4> h
GGSCI Command History
1: info all
2: shell tail -f ggserr.log
3: edit params extr
4: h
RegEx:
See you!
Maiquel.
175
Skipping database transaction on Oracle
GoldenGate
Sometimes a GoldenGate EXTRACT captures a long transaction from the database; it
could be some B.O.F.H. running a DUMMY workload. If that's the case, it's an
"UNWANTED" transaction and you can skip it in GGSCI:
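The GGSCI commands were lost in this copy; a sketch of the usual pair (extract name and transaction id are examples):

GGSCI> SEND EXTRACT EXT1, SHOWTRANS                        -- list long-running open transactions
GGSCI> SEND EXTRACT EXT1, SKIPTRANS 5.17.27634 THREAD 1    -- skip the unwanted one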
Maiquel.
176
GoldenGate: Replicate data from SQLServer
to TERADATA – Part 1
Since we are arriving at the end of the year, I have taken on the mission to replicate data
between SQL Server and Teradata. The worst part of this task is to install and
configure GoldenGate in a Windows environment.
After installing the GG binaries, it is good practice to add the MGR as a Windows
service:
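The command did not survive this copy; the usual way (a sketch) is to run, from the GoldenGate install directory on Windows:

C:\ggate> install ADDSERVICE AUTOSTART
REM creates the Manager Windows service and sets it to start automatically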
In order for GG to access the SQL Server database, you need to create a data source (ODBC)
and configure a new system DSN (here it is db0sql1), selecting SQL Server as the
database driver.
To perform a DBLOGIN:
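A sketch of the DBLOGIN using the system DSN created above (credentials are placeholders):

GGSCI> DBLOGIN SOURCEDB db0sql1, USERID gg_user, PASSWORD ********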
Dieison.
177
GoldenGate: Replicate data from SQLServer
to TERADATA – Part 2
These steps should still be performed on the SQL Server host:
The pump process configuration is very simple; its only function is to transport the trail
files to the destination.
Still on the SQL Server host, it is necessary to create a definition file, which will be used on
the Teradata side (gg-teradata).
First, create a "tables.def" file that contains a dblogin and the tables that will be
replicated.
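The parameter file contents were lost here; a sketch of what tables.def typically looks like and how defgen is run (file and table names follow the post where possible, credentials are placeholders):

-- dirprm/tables.def
defsfile ./dirdef/tables_sqlserver.sql
sourcedb db0sql1, userid gg_user, password ********
table dbo.DLOG_ERRORS;
table dbo.SAC_DATA;
table dbo.SAC_LIST;
table dbo.SAC_TITLE;

-- then, from the GoldenGate home:
defgen paramfile dirprm/tables.def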
This process results in a new file (tables_sqlserver.sql); copy this file to the destination
(gg-teradata).
To configure GoldenGate for Teradata you must install the Teradata ODBC Driver, so that
GoldenGate can access the Teradata database; you can download the ODBC driver here.
After installing the ODBC Driver you need to adjust odbc.ini, which should contain the Teradata
connection information.
[teradata_dev]
Driver=/opt/teradata/client/ODBC/lib/tdata.so
Description=Teradata base
DBCName=teradata1.net
LastUser=
Username=GG_TERA
Password=????????
Database=
DefaultDatabase=dbs
LoginTimeout=3600
SessionMode=ANSI
DateTimeFormat=AAA
NoScan=Yes
characterSet=UTF16
178
After configuring odbc.ini, add an environment variable in the OS, making the file visible
to GoldenGate.
export ODBCINI=$GGATE_HOME/.odbc.ini
*You can add this "export" to the oracle user profile; if it is not set, GoldenGate will fail.
replicat R_MSQL
--This information comes from odbc.ini file
targetdb teradata_dev
SOURCECHARSET PASSTHRU
discardfile ./dirrpt/R_MSQL.dsc, MEGABYTES 1024, purge
sourcedefs ./dirdef/tables_sqlserver.sql
--Map
MAP dbo.DLOG_ERRORS, TARGET T_DB1_SAC_V.VW_DLOG_ERRORS;
MAP dbo.SAC_DATA, TARGET T_DB1_SAC_V.VW_SAC_DATA;
MAP dbo.SAC_LIST, TARGET T_DB1_SAC_V.VW_SAC_LIST;
MAP dbo.SAC_TITLE, TARGET T_DB1_SAC_V.VW_SAC_TITLE;
This is a simple example of replication between SQL Server and Teradata; a bunch of
customizations can be performed depending on the business needs.
Enjoy.
Dieison.
179
Access denied on GoldenGate Manager
After applying GoldenGate fix 12.1.2.1.10 on GoldenGate for Oracle Database 11g, I
got the error below when accessing GoldenGate Director Server:
Maiquel.
180
GoldenGate – exclude Oracle database
thread#
Your Oracle database instance status changed, so you need to dismiss some thread#
in GoldenGate.
Maiquel.
181
GoldenGate 12.1.2 not firing insert trigger
I had to troubleshoot a situation where, after GoldenGate captured some DML and replicated
it, the Oracle database needed to fire an insert trigger to perform some business integration.
SUPPRESSTRIGGERS | NOSUPPRESSTRIGGERS
Valid for nonintegrated Replicat for Oracle. Controls whether or not triggers are fired
during the Replicat session. Provides an alternative to manually disabling triggers.
(Integrated Replicat does not require disabling of triggers on the target system.)
SUPPRESSTRIGGERS is the default and prevents triggers from firing on target objects that
are configured for replication with Oracle GoldenGate.
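So, for this case (a nonintegrated Replicat that must let the insert trigger fire), the counterpart suggested by the quote above would be, as a sketch, in the Replicat parameter file:

-- replicat parameter file
DBOPTIONS NOSUPPRESSTRIGGERS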
Regards!
Maiquel.
182
How to synchronize high data volumes with
GoldenGate
I was dealing with a high workload using data load methods, so I decided to move out of my
comfort zone and fortunately discovered an excellent way to copy/move high data
volumes with GoldenGate Initial Load.
I tested this feature with source GG 12.2 and target GG 12.1, so it was necessary to
specify "FORMAT LEVEL 4" on the rmthost line.
This feature worked very well, and it wasn't necessary to create DB links, bulk batches or
technical workarounds.
183
How to synchronize high data volumes with
GoldenGate – Part II
In the latest post, I documented how to copy/move a high volume of table data
using GoldenGate Initial Load (with the SPECIALRUN option).
Sometimes, we (DBAs/sysadmins) need to move HUGE data volumes (tables with billions of
rows) in the shortest time possible.
According to Oracle:
The following are suggestions that can make the load go faster and help you to avoid
errors.
Data: Make certain that the target tables are empty. Otherwise, there may be
duplicate-row errors or conflicts between existing rows and rows that are being
loaded.
184
Constraints: Disable foreign-key constraints and check constraints. Foreign-key
constraints can cause errors, and check constraints can slow down the loading
process. Constraints can be reactivated after the load concludes successfully.
Indexes: Remove indexes from the target tables. Indexes are not necessary for
inserts. They will slow down the loading process significantly. For each row that is
inserted into a table, the database will update every index on that table. You can add
back the indexes after the load is finished.
Note:
Shazam! \o/
Maiquel.
185
Failure unregister integrated extract
Some times it’s impossible to unregister Integrated Extract, however it need to exclude
to avoid RMAN failures.
Try it:
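The commands were lost in this copy; a hedged sketch of the usual sequence, falling back to dropping the capture process manually when UNREGISTER keeps failing (group and capture names are examples):

GGSCI> DBLOGIN USERID ggate, PASSWORD ********
GGSCI> UNREGISTER EXTRACT EXT1 DATABASE

-- if that still fails, identify and drop the orphan capture process from SQL*Plus:
SQL> select capture_name from dba_capture where capture_name like 'OGG$%';
SQL> exec dbms_capture_adm.stop_capture('OGG$CAP_EXT1');
SQL> exec dbms_capture_adm.drop_capture('OGG$CAP_EXT1');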
Maiquel.
186
Auto start GoldenGate
How to autostart GoldenGate services after system startup?
On Linux: /etc/rc.local
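The actual lines were not included here; a sketch of what usually goes into /etc/rc.local (paths and user are examples), plus the Manager parameters that bring the groups up with it:

# /etc/rc.local
su - oracle -c "cd /oracle/ggate && (echo 'start mgr' | ./ggsci)"

# and in dirprm/mgr.prm, so Manager starts/restarts the Extract and Replicat groups:
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5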
Maiquel.
187
Quick find ODI repository version
How to check which ODI repository component/version is created?
HOT SQL:
Maiquel.
188
ODI 10gR1: Connection to Repository Failed
after Database Descriptor Change
After migrating the database to another host, port or SID, the error below started to
happen when running a scenario.
"Of course" all mapped connections were set correctly in Topology… But the environment is
complex, so it's possible something was missing…
To fix:
So, to check detailed Topology Connection information, as posted here, you can
check this:
189
       SNP_CONTEXT.CONTEXT_NAME AS CONTEXT_NAME,
       SNP_LSCHEMA.LSCHEMA_NAME AS LOGICAL_SCHEMA,
       SNP_CONNECT.JAVA_DRIVER AS DRIVER_INFO,
       SNP_MTXT_PART.TXT AS URL
  FROM SNP_TECHNO
  LEFT OUTER JOIN SNP_CONNECT      ON SNP_CONNECT.I_TECHNO = SNP_TECHNO.I_TECHNO
  LEFT OUTER JOIN SNP_PSCHEMA      ON SNP_PSCHEMA.I_CONNECT = SNP_CONNECT.I_CONNECT
  LEFT OUTER JOIN SNP_PSCHEMA_CONT ON SNP_PSCHEMA_CONT.I_PSCHEMA = SNP_PSCHEMA.I_PSCHEMA
  LEFT OUTER JOIN SNP_LSCHEMA      ON SNP_LSCHEMA.I_LSCHEMA = SNP_PSCHEMA_CONT.I_LSCHEMA
  LEFT OUTER JOIN SNP_CONTEXT      ON SNP_CONTEXT.I_CONTEXT = SNP_PSCHEMA_CONT.I_CONTEXT
  LEFT OUTER JOIN SNP_MTXT_PART    ON SNP_MTXT_PART.I_TXT = SNP_CONNECT.I_TXT_JAVA_URL
 WHERE SNP_CONNECT.CON_NAME IS NOT NULL
 ORDER BY SNP_TECHNO.TECHNO_NAME;
190
Failure to create ODI schedule
Hi,
Today, as on any other normal day, I found a problem with an ODI schedule in a newly
created environment. While creating a schedule for a scenario execution, and then clicking
update schedule in Topology > Agents > OracleDIAgent, I received an exception:
ODI-1274: Agent Exception Caused by: Could not find the AgentScheduler instance in
order to process 'OdiComputePlanning' request
Oracle Support has a solution for this exception, but only for ODI 12c; it happens that
my environment is ODI 11.1.1.6. The Oracle Community has the same question, but
without an answer.
I could not find any solution; then, after crying a lot and restarting everything
(AdminServer, managed server and Node Manager), I saw another error when starting the
Node Manager:
To work around it, I found only methods to bypass the problem, but none that explains how to
actually solve it. To bypass, just change the parameter NativeVersionEnabled to false in
$BEA_HOME/common/nodemanager/nodemanager.properties;
this will solve the problem with the Node Manager, but will not solve the problem with the ODI
schedule.
To solve both exceptions (Node Manager and ODI schedule), keep the Node Manager
parameter NativeVersionEnabled=true and set LD_LIBRARY_PATH in
$domain_home/bin/setDomainEnv.sh as below:
LD_LIBRARY_PATH=$BEA_HOME/server/native/linux/x86_64/
If this procedure helped you solve the problem, or not, send us your comments!
Dieison.
191
ODI – Import(ANT) Modes
Oracle introduced in Data Integrator 12c a spectacular way to avoid object duplication
(10g/11g users will remember the pain).
With "Global IDs", the ODI repository generates a special hash for each object created
in the repository (sometimes it gets updated).
According to the Oracle docs, "read carefully this section in order to determine the import
mode you need."
Duplication
This mode creates a new object (with a new internal ID) in the target Repository, and
inserts all the elements of the export file. The ID of this new object will be based on
the ID of the Repository in which it is to be created (the target
Repository).Dependencies between objects which are included into the export such as
parent/child relationships are recalculated to match the new parent IDs. References to
objects which are not included into the export are not recalculated.
The Duplication mode is used to duplicate an object into the target repository. To
transfer objects from one repository to another, with the possibility to ship new
versions of these objects, or to make updates, it is better to use the three Synonym
modes.
This import mode is not available for importing master repositories. Creating a new
master repository using the export of an existing one is performed using the master
repository Import wizard.
192
Synonym Mode INSERT
Tries to insert the same object (with the same internal ID) into the target repository.
The original object ID is preserved. If an object of the same type with the same internal
ID already exists, then nothing is inserted.
Dependencies between objects which are included into the export such as parent/child
relationships are preserved. References to objects which are not included into the
export are not recalculated.
If any of the incoming attributes violates any referential constraints, the import
operation is aborted and an error message is thrown.
Synonym Mode UPDATE
Tries to modify the same object (with the same internal ID) in the repository. This
import mode updates the objects already existing in the target Repository with the
content of the export file.
Note that this mode is able to delete information in the target object if this information
does not exist in the export file.
This import mode does NOT create objects that do not exist in the target. It only
updates existing objects. For example, if the target repository contains a project with
no variables and you want to replace it with one that contains variables, this mode will
update the project name for example but will not create the variables under this
project. The Synonym Mode INSERT_UPDATE should be used for this purpose.
Synonym Mode INSERT_UPDATE
If no ODI object exists in the target Repository with an identical ID, this import mode
will create a new object with the content of the export file. Already existing objects
(with an identical ID) will be updated; the new ones, inserted. Existing child objects will
be updated, non-existing child objects will be inserted, and child objects existing in the
repository but not in the export file will be deleted.
Dependencies between objects which are included into the export such as parent/child
relationships are preserved. References to objects which are not included into the
export are not recalculated.
This import mode is not recommended when the export was done without the child
components. This will delete all sub-components of the existing object.
Import Replace
This import mode replaces an already existing object in the target repository by one
object of the same object type specified in the import file.This import mode is only
supported for scenarios, Knowledge Modules, actions, and action groups and replaces
all children objects with the children objects from the imported object.
193
Note the following when using the Import Replace mode:
If your object is currently used by another ODI component (for example, a KM
used by an integration interface), this relationship will not be impacted by the import;
the interfaces will automatically use the new KM in the project.
Warnings:
• When replacing a Knowledge module by another one, Oracle Data Integrator sets
the options in the new module using option name matching with the old module’s
options. New options are set to the default value. It is advised to check the values
of these options in the interfaces.
• Replacing a KM by another one may lead to issues if the KMs are radically
different. It is advised to check the interface’s design and execution with the new
KM.
See you!
Maiquel.
194
GoldenGate supplemental log check
Are you bored with GoldenGate objects lacking supplemental logging on the Oracle
Database?
This script checks ALL tables in a GG PRM file, and then checks the supplemental
log information in the database.
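The script itself was lost in this copy; a minimal sketch of the idea (prm path is an example):

#!/bin/bash
# read TABLE entries from the Extract prm and check supplemental log groups for each
for t in $(grep -i "^TABLE" /oracle/ggate/dirprm/ext1.prm | awk '{print $2}' | tr -d ';'); do
  owner=${t%%.*}; table=${t##*.}
  sqlplus -s / as sysdba <<EOF
set pages 0 feed off
select '${owner}.${table}: '||decode(count(*),0,'NO SUPPLEMENTAL LOG','OK')
  from dba_log_groups
 where owner = upper('${owner}') and table_name = upper('${table}');
EOF
done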
195
OGG-01224 Oracle GoldenGate Command
Interpreter for Oracle: Bad file number
I noticed a strange coincidence between GoldenGate Director monitoring failures and
GoldenGate Manager messages.
It is a good idea to stop the Manager process (if possible) before truncating the log file.
196
ERROR OGG-02636 when creating an
integrated extract in GoldenGate 12c on a
Pluggable database 12c
While creating an integrated extract in GoldenGate 12c on a Pluggable Database 12c, I
came across the following error, stating that the needed catalog name is mandatory and
was not being informed.
ERROR OGG-02636 Oracle GoldenGate Capture for Oracle, ext1.prm: The TABLE
specification ‘TABLE table_name’ for the source table ‘table_name’ does not include a
catalog name. The database requires a catalog name.
There are two ways to solve this case. The first, though less recommended, is to add the
name of the pluggable database (catalog) before the owner name in the table maps, for
example:
–Tables
TABLE PDB_NAME.SCHEMA_OWNER.TABLE_NAME;
Not really enjoying this solution, and after searching for long hours without any other
result, our friend Maiquel DC indicated a parameter that identifies the catalog name
for all tables in the extract:
–Parameters
SOURCECATALOG PDB_NAME
197
OGG-0352: Invalid character for character
set UTF-8 was found while performing
character validation of source column
Almost a month without a post!
My bad, December is always a crazy time for DBAs, right?
This post's title error happens because the charset differs between the databases
used in replication through GoldenGate, and it occurs only with alphanumeric columns
(CHAR, VARCHAR, VARCHAR2) because, even if the character length is the same, the
data (byte) length will be different (as I explained here). Take a look at this example:
I usually prefer the second option, just because it's less intrusive than number 1.
See ya!
Matheus.
198
OGG-01934 Datastore repair failed,
OGG-01931Datastore ‘dirbdb’ cannot be
opened
After moving GoldenGate 12c to an ACFS filesystem, we got an endless WARNING
OGG-01931, even though the Datastore was created:
WARNING OGG-01931 Oracle GoldenGate Manager for Oracle, mgr.prm: Datastore
‘dirbdb’ cannot be opened. Error 2 (No such file or directory).
Maiquel.
199
ERROR OGG-00446 – Unable to lock file “*”
(error 11, Resource temporarily unavailable).
GoldenGate 12c was running over an NFS filesystem and had an unexpected stop;
when it tried to start, it hit OGG-00446.
C’est La Vie!
Maiquel.
200
Error OGG-00354 Invalid BEFORE
column:(column_name)
When we use an extraction process with certain macro filters and send the trails to a
GoldenGate instance with the Java adapter, the Java extract process fails with the following
error: OGG-00354 Invalid BEFORE column: (column_name).
EXTRACT PROCESS
PUMP PROCESS
EXTRACT GG JAVA
In some cases, this issue can be resolved just by removing the clause
"GETUPDATEBEFORES", as reported in the Oracle note (Doc ID 2151605.1). But in
some environments this procedure does not resolve it, because it is an undocumented bug in
GoldenGate Java 11.1, which is caused by the use of format release 11.1.
The same process was tested in GoldenGate 12.1, with format release 12.1, and
the problem does not occur.
201
Export/Backup directly to Zip using MKNOD!
We have all faced that situation when we have to make a logical backup/export and don't
have enough space to do it, right?
We know the export usually compresses a lot with zip/gzip… Wouldn't it be great if we could
export directly to a compressed file?
This situation became much more common because of Data Pump, which requires a
directory accessible by the database server. If you have no way to create a
mount point or any other staging area, this can help…
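The commands were not carried over into this copy; a sketch of the classic named-pipe trick with the legacy exp tool (it does not apply to Data Pump, which needs a real directory; paths are examples):

mknod /tmp/exp_pipe p                                   # create the named pipe
gzip < /tmp/exp_pipe > /backup/exp_full.dmp.gz &        # background reader compresses on the fly
exp system/******** full=y file=/tmp/exp_pipe log=/backup/exp_full.log
rm /tmp/exp_pipe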
Hugs!
Matheus.
202
“tail -f” vs “tail -F”: Do you know the
difference?
Hi all!
Do you know the difference between “tail -f” and “tail -F”?
Ok, don’t feel bad. It’s very difficult to find someone who knows… And with a reason, I
can’t find any link explaining this by Googling.
It’s possible that I don’t know how to search it too. But I searched as I’d search if I
didn’t know that… And couldn’t find anything about…
[root@mbdbasrvr]# tail --help
Mandatory arguments to long options are mandatory for short options too.
      --retry                 keep trying to open a file even if it is inaccessible when tail
                              starts or if it becomes inaccessible later; useful when following
                              by name, i.e., with --follow=name
  -f, --follow[={name|descriptor}]
                              output appended data as the file grows;
                              -f, --follow, and --follow=descriptor are equivalent
  -F                          same as --follow=name --retry
  -n, --lines=N               output the last N lines, instead of the last 10
      --max-unchanged-stats=N with --follow=name, reopen a FILE which has not changed size
                              after N (default 5) iterations to see if it has been unlinked or
                              renamed (this is the usual case of rotated log files)

If the first character of N (the number of bytes or lines) is a `+', print beginning with the
Nth item from the start of each file, otherwise, print the last N items in the file. N may have
a multiplier suffix: b 512, k 1024, m 1024*1024.
With --follow (-f), tail defaults to following the file descriptor, which means that even if a
tail'ed file is renamed, tail will continue to track its end. This default behavior is not
desirable when you really want to track the actual name of the file, not the file descriptor
(e.g., log rotation). Use --follow=name in that case. That causes tail to track the named file
by reopening it periodically to see if it has been removed and recreated by some other program.
Report bugs to .
(Yes, I cut off the less useful options. But you can check on your OS if you want to see all
of them.)
So, ok! The information is there, but it isn't very clear. You have to connect some dots to
understand it. I couldn't find examples using -F or --retry… So let's innovate by posting
about it…
The effect of -F (capital) is the same as "-f file_name --retry". It basically keeps working if
the inode of the file changes. It is very useful on systems with log rotation or similar
setups.
Session1:
Session2:
Session1:
Session2:
Oook, we truncated the file but the tail didn't change. But what if I append to it now?
Session1:
Session2:
Still not working, unless you restart the command… That's because the inode changed.
Session1:
Session2:
204
[root@mbdbasrvr]# tail -F test.log new_test
Session1:
Session2:
Session1:
Session2:
Session1:
Session2:
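The original session captures were lost in this copy; a compact sketch that reproduces the behaviour described above (file name is just an example):

# Session 2 (first try): tail -f test.log
# Session 1: rotate the file, which changes its inode
mv test.log test.log.1 && touch test.log && echo "new line" >> test.log
# -> the "tail -f" session keeps following the old inode and never shows "new line"

# Session 2 (second try): tail -F test.log
# Session 1: rotate again
mv test.log test.log.1 && touch test.log && echo "new line" >> test.log
# -> "tail -F" notices the name now points to a new inode, reopens it and shows "new line"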
Very cool and very useful in some situations… Unfortunately only a few people know about it…
Let’s spread this information by sharing this post?
205
GB vs GiB | MB vs MiB | KB vs KiB
Oh man!
Is it just me, or did you not know about this either?
Okay. The difference is well explained here. I saw it for the first time in the EMC
DataDomain interface and it sounded a little "strange", but ok. Last week I heard a
friend talking about it and decided to search… What a surprise! haha
For a DVD (1 GB = 10^9 bytes, while 1 GiB = 2^30 = 1,073,741,824 bytes):
4.7 GB == ~4.38 GiB
8.5 GB == ~7.91 GiB
Matheus.
206
RHEL: Figuring out CPUs, Cores and
Hyper-Threading
Hi all!
It’s a recurrent subject, right? But no one is 100% sure to how figure this out… So, let
me quickly show you my way:
– Physical Cores
– Logical CPUs
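The commands themselves did not survive this copy; a sketch of what I would run for each item:

# Physical cores: count unique (physical id, core id) pairs
grep -E "^physical id|^core id" /proc/cpuinfo | paste - - | sort -u | wc -l

# Logical CPUs (what the OS schedules on, including Hyper-Threading siblings)
grep -c "^processor" /proc/cpuinfo

# Or a summary view:
lscpu | egrep "^CPU\(s\)|Thread|Core|Socket"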
Those links are similar and quite cool to understand the concepts:
https://access.redhat.com/discussions/480953
https://www.redhat.com/archives/redhat-list/2011-August/msg00009.html
http://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/
hyper-threading-technology.html
Matheus.
207
Shellscript: Using eval and SQLPlus
I have always liked Bash programming, and sometimes I need to set Bash variables using
information from Oracle tables.
To achieve that I use the solution below, which I explain in detail later.
In the first part, I call sqlplus with a SELECT that returns a string containing valid
bash commands to set all the variables I need. In this example, sqlplus returns the Database
Name and Instance Name:
OK:DBNAME=xpto; INST_NAME=xpto_1;
The second part exists only for consistency checks. It verifies that the result string starts with
the "OK" keyword. If all went fine, it executes the result string using the bash command
eval.
The eval command can be used to evaluate (and execute) an ordinary string, using
the current bash context and environment. That is different from putting your
commands in a subshell.
The source code below reads sqlplus.log and executes every command using eval:
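The original script was lost in this copy; a minimal sketch of the same idea (capturing the output in a variable instead of a spool file; the SYSDBA connection is an assumption):

#!/bin/bash
result=$(sqlplus -s / as sysdba <<'EOF'
set pages 0 feedback off verify off
select 'OK:DBNAME='||d.name||'; INST_NAME='||i.instance_name||';'
  from v$database d, v$instance i;
EOF
)
if [[ "$result" == OK:* ]]; then
  eval "${result#OK:}"          # runs: DBNAME=...; INST_NAME=...; in the current shell
  echo "Database: $DBNAME - Instance: $INST_NAME"
else
  echo "Unexpected sqlplus output: $result" >&2
  exit 1
fi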
Cassiano.
208
Linux Basic: Creating a Filesystem
From disk to filesystem:
Rescan the SCSI controller to detect the disk (controller id 0, in this example)
– List disks
fdisk -l
fdisk /dev/sdm
pvcreate /dev/sdm1
Create LV
Extend LV
Make FileSystem
mkfs.ext3 -m 0 -v /dev/vgoracle/lvoracle
OBS: "-m 0" sets the percentage of blocks reserved for the super-user to zero ("0" because I
don't want reserved space here), so 100% of the disk will be available for use on the fs.
Just to check:
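The individual commands were not all carried into this copy; a single sketch pulling the steps above together (device, VG and LV names follow the post, sizes are examples):

echo "- - -" > /sys/class/scsi_host/host0/scan          # rescan controller 0
fdisk -l                                                 # list disks, find the new /dev/sdm
fdisk /dev/sdm                                           # create partition /dev/sdm1
pvcreate /dev/sdm1
vgcreate vgoracle /dev/sdm1                              # or vgextend, if the VG already exists
lvcreate -n lvoracle -L 50G vgoracle                     # create LV
lvextend -L +20G /dev/vgoracle/lvoracle                  # extend LV (resize2fs if the fs already exists)
mkfs.ext3 -m 0 -v /dev/vgoracle/lvoracle
mkdir -p /oracle && mount /dev/vgoracle/lvoracle /oracle
df -h /oracle                                            # just to check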
209
Have a nice day!
Matheus.
210
Linux: Resizing Swap Online
Hi all!
Quick one to resize swap online:
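The commands were lost here; a sketch assuming an LVM-backed swap volume (names and sizes are examples):

swapoff /dev/vgsystem/lvswap          # needs enough free RAM to absorb what is paged out
lvextend -L +4G /dev/vgsystem/lvswap  # grow the logical volume
mkswap /dev/vgsystem/lvswap           # rewrite the swap signature for the new size
swapon /dev/vgsystem/lvswap
free -m                               # confirm the new size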
See ya!
Matheus.
211
nc -l – Starting up a fake service
Hi everyone!
Recently I faced a situation that made me discover a very nice and useful
command that helped me a lot, and I hope it helps you guys as well. It's
named:
nc
• Are we going to have everything we need properly set up for the replication?
• How are we going to test the ports if nothing is up there yet? Aren't we going to get
"connection refused"?
All you need to do is install the nc command as root (if it is not installed already):
yum install nc
nc -l
example:
I want to make sure that on the standby server the port 7809 (GoldenGate MANAGER
port) is open. On the standby server you run:
nc -l 7809
Then, from a remote server, you are going to be able to connect through a simple
telnet command:
example:
212
telnet standby.company.com 7809
IN PRACTICE:
standby.server {/home/oracle}: nc -l 7809
Trying 192.168.0.10…
Connected to standby.server.
Cheers!
Rafael.
213
Is My Linux Server Physical or Virtual?
Suppose you are in a server shell and don't know whether your machine is virtualized (a
VM).
One way to check (assuming VMware as the hypervisor) is:
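The check itself was not included in this copy; a couple of commands that answer it (a sketch):

dmidecode -s system-product-name      # prints "VMware Virtual Platform" on a VMware guest
lspci | grep -i vmware                # VMware virtual devices show up here
# newer distributions also ship:
# systemd-detect-virt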
Matheus.
214
VMWare: Adding Shared Disks for Clustered
Oracle Database
Hi folks!
Today a friend asked how to configure disks on VMware to create a virtualized
cluster database. I revisited my old notes and decided to share. Here it goes…
So, why?
To prove a concept, evaluate RAC configuration (caches on sequences, etc.) and labs,
to learn and practice RAC stuff…
1. Add a new disk to one of the machines. This way, one will be the "primary" and will
share its disks with the other.
215
3. Create a specific controller to this “shared disks”
216
# Other Machine
5. Add the existing disk to the other VM (not the primary, but from the primary)
217
7. Create a new controller, as you made on primary and select it:
218
8. Set controller to virtual sharing
219
OBS:
If this error happens, one of your controllers is not in sharing mode. Please check it.
220
VMware: Recognize Memory Addition Online
A quick script to do that:
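The script was lost in this copy; a sketch of the usual memory-hotplug loop on a Linux guest:

#!/bin/bash
# bring any hot-added (still offline) memory blocks online
for f in /sys/devices/system/memory/memory*/state; do
  grep -q offline "$f" && echo online > "$f"
done
grep MemTotal /proc/meminfo    # confirm the new total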
221
Recursive string change
If you want to recursively change one string to another, it's simple: you need a list with full
file path names called 'output_list', then run the command below:
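The command did not survive this copy; a sketch of the usual loop (the strings are placeholders):

while read -r f; do
  sed -i 's/OLD_STRING/NEW_STRING/g' "$f"
done < output_list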
Keep in mind it’s a DANGEROUS command, double check your file list, and if
necessary, make a full backup from you system.
Maiquel.
222
Kludge to keep Database Alive
It’s not so pretty and Oracle has the Oracle Restart services for that. But to a
temporary and quick need, this script solve the problem:
if ps -fu oracle | grep -v grep | grep ora_smon_orcl > /dev/null
then
  echo "orcl instance is up and running"
else
  echo "orcl instance is down"
  sqlplus /nolog > /dev/null 2>&1 <<EOF
conn / as sysdba
startup
exit
EOF
fi
Matheus.
223
RHEL7: rc.local service not starting
It’s very common to automate application startup in rc.local on Linux systems.
Was testing Red Hat 7.2 (Maipo), and found that apps was’t started.
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run ‘chmod +x /etc/rc.d/rc.local’ to ensure
# that this script will be executed during boot .
touch /var/lock/subsys/local
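The fix is essentially what the comment block above says; a sketch:

chmod +x /etc/rc.d/rc.local
systemctl start rc-local       # rc-local.service is static; once the script is executable it runs at boot
systemctl status rc-local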
224
-Xms1g -Xmx1g -XX:MaxPermSize=512m -Dweblogic.Name=AdminServer
-Djava.security.policy=/oracle/binaries/wlserver/server/lib/weblogic.policy .
Maiquel.
225
Mount Directory from Remote RHEL7 Server
(NFS)
Quick Post: To mount a directory via NFS from a RHEL7 remote server:
Source Host:
* Note: "/bin/systemctl" is the new way in RHEL7. For other versions you can just use
"service nfs restart".
Target Host:
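The commands were not carried over; a sketch covering both hosts (paths and host names are examples):

# Source (NFS server) host
echo "/oracle/export target-host(rw,sync,no_root_squash)" >> /etc/exports
/bin/systemctl restart nfs        # RHEL7; older releases: service nfs restart
exportfs -v

# Target (client) host
mkdir -p /mnt/remote_oracle
mount -t nfs source-host:/oracle/export /mnt/remote_oracle
df -h /mnt/remote_oracle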
226
AIX: NTP Service Basics
Hi all,
I always forget these commands and have to search for them again. For future searches, I
expect to find them in my own posts…
To start Service
startsrc -s xntpd
To stop Service
stopsrc -s xntpd
Configuration File
/etc/ntp.conf
227
Flush DNS Cache
Need to flush the DNS cache? Easy like that:
# Linux
1) Flush DNS – “Auto”
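The Linux commands were lost here; a sketch, since "Auto" depends on which caching daemon is actually running:

/etc/init.d/nscd restart        # when nscd is the local cache
systemctl restart dnsmasq       # when dnsmasq is doing the caching (systemd hosts)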
# Windows
1) Flush DNS
ipconfig /flushdns
For quick referece:
http://www.cyberciti.biz/faq/rhel-debian-ubuntu-flush-clear-dns-cache/
Matheus.
228
Flush DNS on Linux
I began posting about ORA-12514 after database migration involving DNS adjustment.
Then, to make it more clear I wrote about How to Flush DNS Cache .
Hugs!
Matheus.
229
RHEL: Adding User/Group to SSH and
SUDOERS file
Some Linux basics… To add a group or a user (in this case "new_group") to SSH and the
sudoers file:
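The actual entries were lost in this copy; a sketch of both files (group name taken from the post):

# /etc/ssh/sshd_config - allow the group to log in, then restart sshd
AllowGroups new_group
# systemctl restart sshd   (or: service sshd restart)

# /etc/sudoers - edit with visudo and add:
%new_group ALL=(ALL) ALL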
Matheus.
230
Oracle Database: Compression Algorithms
for Cloud Backup
Hi all!
Again talking about cloud backups for on-premise databases: an important aspect is
to compress the data, so network consumption is reduced since less data is
being transferred.
It's also important to evaluate CPU consumption: the higher the compression level, the
more CPU it uses. So, pay attention!
Now, how to choose the compression algorithm? Here are the options Oracle gives us:
SQL> col ALGORITHM_NAME for a15
SQL> set line 200
SQL> select ALGORITHM_NAME, INITIAL_RELEASE, TERMINAL_RELEASE,
            ALGORITHM_DESCRIPTION, ALGORITHM_COMPATIBILITY
       from v$rman_compression_algorithm;

ALGORITHM_NAME  INITIAL_RELEASE  TERMINAL_RELEASE  ALGORITHM_DESCRIPTION                         ALGORITHM_COMPATIB
--------------  ---------------  ----------------  --------------------------------------------  ------------------
BZIP2           10.0.0.0.0       11.2.0.0.0        good compression ratio                        9.2.0.0.0
BASIC           10.0.0.0.0                         good compression ratio                        9.2.0.0.0
LOW             11.2.0.0.0                         maximum possible compression speed            11.2.0.0.0
ZLIB            11.0.0.0.0       11.2.0.0.0        balance between speed and compression ratio   11.0.0.0.0
MEDIUM          11.2.0.0.0                         balance between speed and compression ratio   11.0.0.0.0
HIGH            11.2.0.0.0                         maximum possible compression ratio            11.2.0.0.0
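To actually pick one of them, a sketch (note that LOW/MEDIUM/HIGH require the Advanced Compression Option):

RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN> CONFIGURE DEVICE TYPE SBT_TAPE BACKUP TYPE TO COMPRESSED BACKUPSET;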
Ok,
But how to evaluate my compression ratio?
prddb> col STATUS for a10
prddb> col INPUT_BYTES_DISPLAY for a15
prddb> col OUTPUT_BYTES_DISPLAY for a15
prddb> col TIME_TAKEN_DISPLAY for a20
prddb> SELECT SESSION_KEY,
  2         INPUT_TYPE,
  3         STATUS,
  4         TO_CHAR(START_TIME, 'mm/dd/yy hh24:mi') start_time,
  5         TO_CHAR(END_TIME, 'mm/dd/yy hh24:mi') end_time,
  6         -- ELAPSED_SECONDS / 3600 hrs,
  7         COMPRESSION_RATIO,
  8         INPUT_BYTES_DISPLAY,
  9         OUTPUT_BYTES_DISPLAY,
 10         TIME_TAKEN_DISPLAY
 11    FROM V$RMAN_BACKUP_JOB_DETAILS
 12   where input_type like 'DB%'
 13   ORDER BY SESSION_KEY
 14  /

SESSION_KEY INPUT_TYPE STATUS     START_TIME     END_TIME       COMPRESSION_RATIO INPUT_BYTES_DIS OUTPUT_BYTES_DI TIME_TAKEN_DISPLAY
----------- ---------- ---------- -------------- -------------- ----------------- --------------- --------------- --------------------
          2 DB FULL    COMPLETED  04/22/16 12:59 04/22/16 13:06        6,84838963 4.26G           636.50M         00:06:57
          9 DB FULL    COMPLETED  04/22/16 13:47 04/22/16 13:54        6,83764706 4.26G           637.50M         00:06:37
         14 DB FULL    COMPLETED  04/22/16 16:26 04/22/16 16:33        6,84189878 4.26G           637.25M         00:06:48
KB: https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmconfa.htm#BRADV89466
232
Done?
If you have any question, please let me know in the comments!
Matheus.
233
Oracle Database Backup to Cloud: KBHS –
01602: backup piece 13p0jski_1_1 is not
encrypted
Hi all!
I’m preparing a material about downloading, configuring using Oracle Database Cloud
Backup. My case is about backuping a local database to Cloud.
So, as avant-première for you from the Blog, a quick situation about:
# Error
Why?
To use Oracle Database Backup to Cloud you need to use at least one encryption
method.
Oracle offers basically three:
– Password Encryption
234
– Transparent Data Encryption (TDE)
– Dual-Mode Encryption (a combination of password and TDE).
In this post I covered the easiest one, but I recommend you take a look at the KB:
https://docs.oracle.com/cloud/latest/dbbackup_gs/CSDBB.pdf
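The command itself was not included here; a sketch of the password-based ("easiest") method before the backup (password is a placeholder):

RMAN> SET ENCRYPTION ON IDENTIFIED BY "MyBackupPwd" ONLY;
RMAN> BACKUP DEVICE TYPE SBT DATABASE;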
Matheus.
235
RMAN Raise ORA-19913 ORA-28365 On
Restore from Cloud Backup
At first I thought it was some error with Database Backup to Cloud while testing. Then I
realized it was a simple mistake of my own.
Did it happen again?
At that point I suspected some kind of bug… But it was my mistake, and it is not related to
the Cloud, but to the use of Encryption. To understand:
For Backup: Use ENCRYPTION
For Restore/Recover: Use DECRYPTION
236
Obvious, but it took me a minute to realize…
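In RMAN terms, a sketch of that pair (the password is a placeholder):

RMAN> SET ENCRYPTION ON IDENTIFIED BY "MyBackupPwd" ONLY;    -- at backup time
RMAN> SET DECRYPTION IDENTIFIED BY "MyBackupPwd";            -- before restore/recover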
See ya!
Matheus.
237
UnknownHostException: Could not
authenticate to Oracle Database Cloud
Backup Module
Hi all!
When running Oracle Database Cloud Backup Module, found this error:
Command:
Error:
Solution:
Set the Replication Policy of the Oracle Storage Cloud Service.
In My Services Home, Oracle Storage Cloud Service has a link to "Set Replication
Policy". Simply set it.
But pay attention: once you select a replication policy, you can't change it.
238
As you can see, I already did it:
KB:
Problems with Installing the Backup Module
Selecting a Replication Policy for Oracle Storage Cloud Service
See ya!
Matheus.
239
Cloud Computing Assessment – Free
Hi folks!
I’ve been away a few days, right? My bad. I’m sorry.
But I have a good new. I’m preparing a new site where the content of this blog will be
more efficiently allocated. Of course, the daily posts will continue. You’ll like it, I
promise.
By now, I’d suggest you to make this assessment about Cloud Computing provided by
Cloud-Institute.org .
The questions themselves generate some questions for reflection. Follow the link:
http://cloud-institute.org/cloud-open-exam.html
See ya!
Matheus.
240
Monitoring MySQL with Nagios – Quick View
Hi all!
As you know, we have some commercial solutions for monitoring/alerting on MySQL, like
MySQL Enterprise Monitor or Oracle Grid/Cloud Control.
But, given that we are using MySQL instead of Oracle Database, we can assume it's
probably a decision based on cost. So, considering Open Source solutions, we
basically have Nagios, Zabbix, OpenNMS…
1. check_mysql.pl
– Check status of MySql server (slow queries, etc)
– Queries per second graph
2. check_db_query.pl
– Allows running SQL queries and setting thresholds for warning and critical. Ex:
check_db_query.pl -d database -q query [-w warn] [-c crit] [-C conn_file] [-p
placeholder]
241
define command{
    command_name check_db_entries
    command_line /usr/local/bin/perl $USER1$/check_db_query.pl -d "$ARG1$" -q "$ARG2$" $ARG3$
}
So, now it’s just make your queries and implement your free monitoring on MySQL!
Matheus.
242
Optimize fragmented tables in MySQL
It happens on MySQL, as you know. Running OPTIMIZE TABLE solves the issue.
BUT, be careful! During the optimize the table stays locked (writing is not possible).
(Fragmented Table)
So what?
To avoid locking every table, the script below shows and runs the optimize (if you want to
list but not run, comment out the line) only for tables that actually have fragmentation.
#!/bin/sh
echo -n "MySQL username: " ; read username
echo -n "MySQL password: " ; stty -echo ; read password ; stty echo ; echo
mysql -u $username -p"$password" -NBe "SHOW DATABASES;" | grep -v 'lost+found' | while read database ; do
  mysql -u $username -p"$password" -NBe "SHOW TABLE STATUS;" $database | while read name engine version \
      rowformat rows avgrowlength datalength maxdatalength indexlength datafree autoincrement createtime \
      updatetime checktime collation checksum createoptions comment ; do
    if [ "$datafree" -gt 0 ] ; then
      fragmentation=$(($datafree * 100 / $datalength))
      echo "$database.$name is $fragmentation% fragmented."
      mysql -u "$username" -p"$password" -NBe "OPTIMIZE TABLE $name;" "$database"
    fi
  done
done
MySQL username: root
MySQL password:
...
mysql.db is 12% fragmented.
mysql.db optimize status OK
mysql.user is 9% fragmented.
mysql.db optimize status OK
...
243
This script is a full copy from this post by Robert de Bock .
Thanks, Robert!
Matheus.
244
MySQL Network Connections on
‘TIME_WAIT’
Hi all!
Recently I caught a bunch of connections in 'TIME_WAIT' on a MySQL Server through
'netstat -antp | grep 3306'…
After some time, we identified this was caused by the environment not using DNS,
only fixed IPs (uuugh!)…
As you know, for security measures MySQL maintains a host cache for connections
established. From MySQL docs:
“For each new client connection, the server uses the client IP address to check
whether the client host name is in the host cache. If not, the server attempts to resolve
the host name. First, it resolves the IP address to a host name and resolves that host
name back to an IP address. Then it compares the result to the original IP address to
ensure that they are the same. The server stores information about the result of this
operation in the host cache. If the cache is full, the least recently used entry is
discarded.”
9.12.6.2 DNS Lookup Optimization and the Host Cache
For this reason, a DNS "reverse" lookup for each login was hanging these
connections.
The solution?
Right way: Add an A type registry in DNS for the hosts. Use DNS!
Quick way: Add on /etc/hosts from database server the mapping for the connected
hosts, avoiding the DNS Lookup.
Quicker way: set the skip-name-resolve variable in /etc/my.cnf. This variable
avoids this behavior in the database layer for new connections and solves the problem.
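For the "quicker way", a sketch of the my.cnf entry (remember that with it, GRANTs must use IP addresses, not host names):

# /etc/my.cnf
[mysqld]
skip-name-resolve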
See ya!
Matheus.
245
MySQL: Difference Between current_date(),
sysdate() and now()
Do you know the difference?
Take a look at the functions now() and sysdate() after executing a sleep of 5
seconds:
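The original output was lost in this copy; a sketch of the comparison:

mysql> SELECT now(), sysdate(), sleep(5), now(), sysdate()\G
-- now() keeps returning the statement start time, while sysdate() returns the time
-- at which the function itself executes, so after sleep(5) the two differ by ~5 seconds.
-- current_date() returns only the date part, with no time component.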
Matheus.
246
Getting today’s Errors and Warnings from
MySQL log
Quick one!
# Warnings
# Errors
And a Bonus!
To get entries from X days ago:
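The grep one-liners were lost here; a sketch, assuming the error log is at /var/log/mysqld.log and uses the classic "YYMMDD" timestamp format:

grep "^$(date +%y%m%d).*\[Warning\]" /var/log/mysqld.log        # today's warnings
grep "^$(date +%y%m%d).*\[ERROR\]" /var/log/mysqld.log          # today's errors
grep "^$(date -d '3 days ago' +%y%m%d)" /var/log/mysqld.log     # bonus: entries from 3 days ago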
Matheus.
247
MySQL: Unable to connect to database ‘xxx’
on server ‘xxx’ on port xx with user ‘root’
Quick tip:
# Problem:
Solution:
248
Say Hello to Oracle Apex and for the new
Blog member too!
Hi people!
That’s my first post and I would like introduce me and this great tool what I will talk
about here, at this site, but Let’s start about Oracle Application Express, or Apex,
which is probably your most intention here! You can read about my history with Apex
in the end of this article.
Oracle Apex is a development tool that enables you to build applications using only
your web browser, basically using PL/SQL; this familiarity helps in creating
departmental applications. Even DBAs can create good web applications easily. Apex
is not a new tool: it was first released in 2006, and previously it was called HTML DB.
After 10 years of development it offers a modern IDE, and with more dedication you
can use it to build complex solutions using CSS and JavaScript.
Apex also comes with an entire system to manage your development life cycle. Using
Team Development it is possible to track your project's progress from brainstorming to
bug tracking and continuous maintenance.
You can start using and testing Oracle Apex right now, just by accessing
apex.oracle.com and creating your own workspace. Just click Get Started and select
Free Workspace. Remember that it should be used for educational purposes.
By the way, over the next weeks and articles, I intend to write about how
to create an entire application, describing most of the standard options and explaining
Oracle Apex in detail.
249
I started using Apex in version 2, when the standard templates produced applications
that looked like the Enterprise Manager of some years ago. The latest version, 5.0.2, was
released in October 2015. Apex 5 has a revolutionary IDE, which is at the same
time powerful, intuitive, clean and easy to use.
Version 2
Version 5
Enjoy, and welcome to the Apex world! There is an active community on OTN that
supports most user needs and questions through discussion web forums.
Cassiano.
250
Understanding Apex URL
A basic step in Apex development is to understand the URL syntax.
I keep this note in my favorites folder, to check anytime.
http://apex.oracle.com/ords/f?p=4350:1:220883407765693447
or
f?p=App:Page:Session:Request:Debug:ClearCache:itemNames:itemValues:PrinterFri
endly
where
• Request - A keyword that you can use to react to in your process workflow. When
you press a button, Request will be set to the button's action name; e.g. when you press
Submit or Next page, your Request variable should have the "submit" value.
• Debug - Set this flag to YES to increase the log level (must be uppercase).
• ClearCache - Specify the numeric page number to clear cached items on a single
page; this flag sets all item values to null. To clear cached items on multiple
pages, use a comma-separated list of page numbers. Clearing a page's cache
also resets any stateful processes on the page.
251
javascript:apex.confirm
The simplest way to ask for your user's attention is to pop up a JavaScript browser
question. Something like "Do you really want to proceed?"
In the APEX world, just remember: you do not need to reinvent the wheel!
Let's use the native Apex JavaScript API, which comes with a function named confirm
that asks the user for confirmation before submitting the page or running some process.
Easy Example
First, select the button you want this behavior, then set the property Target to URL.
Second, set the target URL to the JavaScript code below, and don't forget to adapt the
message to your needs.
The second parameter can be used to set the value of REQUEST when the page is
submitted. You can use this value to selectively run some process point, by setting the
Condition property to "when request = value".
Complex Example
For more complex needs, you can set Apex item values before proceeding with the page
submit. In this case, the second parameter should be an object with all the items and
values necessary for your page flow and processing.
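Again as a sketch, the object form lets you set items along with the request (item names and values are hypothetical):

javascript:apex.confirm('Save and submit this order?',
  {request:'SAVE', set:{'P1_STATUS':'CONFIRMED', 'P1_NOTIFY':'Y'}});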
Cassiano.
252
APEX: Let’s Talk About Charts Attributes
(Inverted Scale)
Hello! If you have played with Apex before, you know how easy it is to build a simple report
to present your data. But sometimes your boss will ask you to build something more
"graphical" or with a better design. I never thought about color themes or pictures
when I developed my simple reports in SQL*Plus. Those colorful themes and design
things are, most of the time, not familiar to DBAs.
Thinking about that, I decided to write this article, always focusing on the standard chart
plugin that comes with Apex by default. Take a look at the chart below.
253
First of all, to change chart attributes, you must select the item named
"Attributes" on the left side. Only this way will you see all chart properties, in the box on the
right side of the Apex Development IDE.
After that, you should see the chart attributes in the right-side box, like the pic below:
Rendering - Apex 5 comes with an HTML5 plugin; prefer this instead of the old Flash charts.
HTML5 is a mobile-friendly template and should run better in modern browsers, which is
the standard right now.
Show Grid - Which lines should be rendered? By default, the chart shows only vertical
lines. You can choose here to display horizontal lines as well as secondary gray lines
between the black main lines.
254
Marker - You can change the marker for each series, making the chart clearer.
Several options are available: squares, circles, cross lines and many others. In the
example I use the Diamond marker.
Next challenge? I was asked how to invert the graph, because the data represents
"errors" and the customer asked for lower values to be at the top of the list. My first
idea was to use math and multiply the results by (-1). That way the graph line is inverted
as necessary, but the values no longer represent the correct figures.
The correct way to do it is by modifying the X axis properties. Let's take a look at the available
Axis properties.
Title, Prefix/Postfix - Title doesn't need explanation. The others modify how every value
and hint are rendered on the chart canvas.
Label Rotation - to write the label top-down or even with an inclination, like the example
below.
Invert Scale! Here is our wonder! Change it to invert your chart scale and achieve my
customer's need.
Major/Minor Interval - Specify how much space there is between major (black) and minor
(gray) lines in the chart. Check the results. As you can see, in this example I inverted the
scale on both the X and Y axes.
255
That's it, folks! In the next articles, I'll write more about chart styles and customizations!
Have a nice week.
Cassiano.
256
Script: Copy Large Table Through DBLink
To understand the situation:
Task: Migrate a large 11.1.0.6 database to a 12c Multi-Tenant Database with
minimum downtime.
To better use the new features, reorganize objects and compress data, I decided to migrate
the data logically (not physically).
The first option was to migrate schema by schema through Data Pump with a database
link. There are no LONG columns.
Problem 1: The database was veeery slow, a perfect match for Bug 7722575
- DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp.
Workaround: None.
Solution: Upgrade to 11.2. (No way).
Other things: Yes, I tried changing cursor sharing, the estimate from blocks to
statistics, and everything documented. It didn't work.
Ok doke! Let’s use traditional exp/imp tools (with some migration area), right?
Problem2: ORA-12899 on import related to multiblocking x singleblocking charsets.
Solution: https://grepora.com/2015/11/20/charsets-single-byte-vs-multibyte-issue/
Done? Not for all. For some tables, just happened the error:
DECLARE
  counter number;
  CURSOR cur_data is
    select row_id
      from (select row_id, num
              from SCHEMA_OWNER.AUX_ROWID@SOURCEDB
             order by num)
     where num >= &1 and num <= &2;
BEGIN
  counter := 0;
  FOR x IN cur_data LOOP
    BEGIN
      counter := counter + 1;
      insert into SCHEMA_OWNER.TABLE
        select * from SCHEMA_OWNER.TABLE@SOURCEDB where rowid = x.row_id;
      if counter = 1000 then   -- commit every 1000 rows
        commit;
        counter := 0;
      end if;
    EXCEPTION
      when OTHERS then dbms_output.put_line('Error ROW_ID: '||x.row_id||sqlerrm);
    END;
  END LOOP;
  COMMIT;
END;
/
exit;
3) Run it in a BAT or SH like this (my example was made for a .bat, with "chunks" of 50
million rows, committing every 1k rows as defined in item 2):
@echo off
set /p db="Target Database.: "
set /p user="Username.......: "
set /p pass="Password..................: "
pause
START sqlplus %user%/%pass%@%db% @run_chunck.sql 1 2060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 2060054 52060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 52060054 102060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 102060054 152060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 152060054 202060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 202060054 252060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 252060054 302060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 302060054 352060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 352060054 402060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 402060054 452060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 452060054 502060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 502060054 552060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 552060054 602060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 602060054 652060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 652060054 702060053
-- count(*) from table
258
message from client MATHEUS_BOESING SQL*Net message to client
c7a5tcc3a84k6
Matheus.
259
Oracle Convert Number into Days, Hours,
Minutes
There’s a little trick…
Today I had to convert a “number” of minutes into hours:minutes format. Something
like convert 570 minutes in format hour:minutes. As you know, 570/60 is “9,5” and
should be “9:30”.
FORMATED
——–
23:59:59
Ok, it works. But it uses "seconds past midnight" (sssss). By the way, it only works between
0 and 86399:
The problem remains. How to convert a plain number of minutes (570 minutes -> 9:30), for
example?
The best way I solve was:
It always works.
boesing@db> select
  2    TO_CHAR(TRUNC(86399/3600),'FM9900') || ':' ||        -- hours
  3    TO_CHAR(TRUNC(MOD(86399,3600)/60),'FM00') || ':' ||  -- minutes
  4    TO_CHAR(MOD(86399,60),'FM00')                        -- seconds
  5  from dual;
260
TO_CHAR(TRUNC
————-
23:59:59
boesing@db> select
  2    TO_CHAR(TRUNC(570/3600),'FM9900') || ':' ||        -- hours
  3    TO_CHAR(TRUNC(MOD(570,3600)/60),'FM00') || ':' ||  -- minutes
  4    TO_CHAR(MOD(570,60),'FM00')                        -- seconds
  5  from dual;

TO_CHAR(TRUNC
-------------
00:09:30
boesing@db> select
  2    TO_CHAR(TRUNC(MOD(570,3600)/60),'FM00') || ':' ||  -- hours
  3    TO_CHAR(MOD(570,60),'FM00')                        -- minutes
  4  from dual;

TO_CHAR
-------
09:30
Matheus.
261
Purge SYSAUX Tablespace
Your SYSAUX is bigger than the rest of database?
It’s not uncommon to “old” databases, usually bad administrated. Some databases
configuration must cause this situation.
The general indication is to review stats and reports retention of objects and database.
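The queries themselves were not carried into this copy; a sketch of the usual starting points (the retention values are examples):

-- what is actually using SYSAUX
select occupant_name, round(space_usage_kbytes/1024) mb
  from v$sysaux_occupants
 order by 2 desc;

-- reduce AWR and optimizer-stats history retention
exec dbms_workload_repository.modify_snapshot_settings(retention => 8*24*60);  -- 8 days (in minutes)
exec dbms_stats.alter_stats_history_retention(14);                              -- 14 days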
262
Matheus.
263
Statistics not Being Auto Purged – Splitting
Purge
Hi all!
The post Purge SYSAUX Tablespace, made on February 8th, is still highly
accessed. So, if you're interested, here goes another post about it:
Last week I supported a database that was not purging statistics through the MMON job,
because it was timing out. Worse than that, the database had not purged statistics
since 2012 and SYSAUX was huge!
To understand: By default, the MMON performs the automatic purge that removes all
history older than:
1) current time – statistics history retention (by default 31 days)
2) time of recent analyze in the system – 1
MMON performs the purge of the optimizer stats history automatically, but it has an
internal limit of 5 minutes to perform this job. If the operation takes more than 5
minutes, then it is aborted and stats not purged.
Unexpected error from flashback database MMON timeout action
Errors in file /oracle/diag/rdbms/oracle/trace/oracle_mmon_1234567.trc:
ORA-12751: cpu time or run time policy violation
You can still follow the post Purge SYSAUX Tablespace. It solves the issue and implements
the "shrinks".
But for a huge database it might take some time… And occasionally you might need to do
it in maintenance windows, in more than one part… So, this can help you:
Script to purge day by day (max 2,000 days, ~5 years per execution :P):
set serveroutput on size unlimited
set time on
set timing on
spool purge_stats.log
declare
  vRetentionLimit Date;
  vOldestStat     Date := to_date('13/02/2012 00:00','dd/mm/yyyy hh24:mi'); -- inform oldest stats date
  vStopExecuting  Date := to_date('29/04/2016 08:30','dd/mm/yyyy hh24:mi'); -- inform maintenance window ending
begin
  select to_date(sysdate - dbms_stats.get_stats_history_retention) into vRetentionLimit from dual;
  for i in 1..2000 loop
    if sysdate >= vStopExecuting then
      exit;
    end if;
    if vOldestStat <= vRetentionLimit then
      dbms_output.put_line(to_char(sysdate,'dd.mm.yyyy hh24:mi:ss') || ' - Purging from: ' ||
                           to_char(vOldestStat,'dd.mm.yyyy hh24:mi:ss'));
      dbms_stats.purge_stats(vOldestStat);
      dbms_output.put_line(to_char(sysdate,'dd.mm.yyyy hh24:mi:ss') || ' - Purged from: ' ||
                           to_char(vOldestStat,'dd.mm.yyyy hh24:mi:ss')||chr(13)||chr(10));
    end if;
    vOldestStat := vOldestStat + 1;
  end loop;
end;
/
spool off
This way, the purge can be split into day-by-day windows. Now you can do the
moves and rebuilds described in Purge SYSAUX Tablespace.
265
Sqlplus: Connect without configure
TNSNAMES
Okay, you probably know this, but it's always useful to remember… If you don't want
to configure your TNSNAMES, you can connect directly using the description of your
database, this way:
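The example connection was lost here; a sketch of connecting straight to a descriptor (host, port and service are placeholders):

sqlplus "user/pass@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=mysvc)))"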
Based on this, I made two scripts, to connect with the SID (c.sql) or with the
service_name (s.sql), and make my life easier. Here are the scripts:
sqlplus> get c
  1  DEFINE VHOST = &1.
  2  DEFINE VPORT = &2.
  3  DEFINE VSID = &3.
  4  DEFINE VDESC='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=&VHOST)(PORT=&VPORT)))(CONNECT_DATA=(SID=&VSID)(server=dedicated)))'
  5  discon
  ...
  7  set linesize 1000
  8  set sqlprom '&&VSID '
  9  select instance_name, host_name
 10    from v$instance;
 11  exec dbms_application_info.SET_MODULE('MATHEUS_BOESING','DBA');
 12  alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';
 13  UNDEFINE VDESC
 14  UNDEFINE 1
 15  UNDEFINE 2
 16* UNDEFINE 3

sqlplus> get s
  1  DEFINE VHOST = &1.
  2  DEFINE VPORT = &2.
  3  DEFINE VSID = &3.
  4  DEFINE VDESC='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=&VHOST)(PORT=&VPORT)))(CONNECT_DATA=...)'
  ...
  7  connect matheus_boesing@&&VDESC
  8  set linesize 1000
  9  set sqlprom '&&VSID '
 10  select instance_name, host_name
 11    from v$instance;
 12  exec dbms_application_info.SET_MODULE('MATHEUS_BOESING','DBA');
 13  alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';
 14  UNDEFINE VDESC
 15  UNDEFINE 1
 16  UNDEFINE 2
 17* UNDEFINE 3
Ok, but, let’s suppose you are working in a cluster and wants to connect directly to the
another instance. I made the script below (ci.sql). It’s not beautiful, but is a lot hopeful:
sqlplus> get ci
  1  DEFINE VINT = &1.
  2  undefine VHOST
  3  undefine VSID
  4  VARIABLE VCONN varchar2(100)
  5  PRINT ret_val
  6  BEGIN
  7    SELECT '@c '||host_name||' 1521 '||INSTANCE_NAME
  8      INTO :VCONN
  9      FROM gv$instance where INSTANCE_NUMBER=&VINT;
 10  END;
 11  /
 12  set head off;
 13  spool auxcon.sql
 14  prompt set head on;
 15  print :VCONN
 16  prompt set head on;
 17  spool off;
 18* @auxcon
266
As you can see, you inform the inst_id you want to connect to. It can be used like:
Nice, right?
The scripts I shared help me a lot every day, and they're exclusive:
I haven't found anything like them anywhere, so I made them myself.
267
ASM: Disk Imbalance Query
It can be useful if you work frequently with OEM metrics…
# OEM Query
# MatheusDBA Query
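The queries themselves were lost in this copy; a sketch of a per-diskgroup imbalance check (not the exact OEM metric query):

select g.name diskgroup,
       round(100 * ( max((d.total_mb - d.free_mb) / d.total_mb)
                   - min((d.total_mb - d.free_mb) / d.total_mb) ), 2) pct_imbalance
  from v$asm_disk d, v$asm_diskgroup g
 where d.group_number = g.group_number
   and d.group_number <> 0
   and d.total_mb > 0
 group by g.name
 order by 2 desc;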
Matheus.
268
Rebuild all indexes of a Partitioned Table
Another quick post!
Considering you frequently need to rebuild all indexes of a partitioned table (local and
global indexes), this is a quick script that makes the task a little bit easier:
begin
-- local indexes
for i in (select p.index_owner owner, p.index_name, p.partition_name
from dba_indexes i, dba_ind_partitions p
where i.owner='&OWNER'
and i.table_name='&TABLE'
and i.partitioned='YES'
and i.visibility='VISIBLE' -- Rebuild only of the visible indexes, to get real effect :)
and p.index_name=i.index_name
and p.index_owner=i.owner
order by 1,2) loop
execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild
partition '||i.partition_name||' online parallel 12'; -- parallel 12 solve most of the
problems
execute immediate 'alter index '||i.owner||'.'||i.index_name||' parallel 1'; -- If you don't
use parallel indexes in your database, or the default parallel of the index, or what you
want...
end loop;
-- global indexes
for i in (select i.owner owner, i.index_name
from dba_indexes i
where i.owner='&OWNER;'
and i.table_name='&TABLE;'
and i.partitioned='NO'
and i.visibility='VISIBLE' -- same comment
order by 1,2) loop
execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild online parallel 12';
-- same
execute immediate 'alter index '||i.owner||'.'||i.index_name||' parallel 1'; -- same :)
end loop;
end;
/
Matheus.
269
Solving Simple Locks Through @lock2s and
@killlocker
Hi guys!
This post shows the simplest and most common kind of object lock, and the
simplest way to solve it (killing the locker).
It’s so common that I scripted it. Take a look:
You can identify the Locker by the LMODE column, and all its Waiters by a REQUEST
value other than 'NONE', listed below each Locker…
greporadb> @killlocker

'ALTERSYSTEMKILLSESSION'''||SID||','||SERIAL#||'''IMMEDIATE;'
--------------------------------------------------------------------------------
alter system kill session '252,63517' immediate;
alter system kill session '354,18145' immediate;

2 rows selected.

greporadb> alter system kill session '252,63517' immediate;
System altered.

greporadb> alter system kill session '354,18145' immediate;
System altered.

greporadb> @lock2s
no rows selected
Solved!
My magic scripts? Here it goes:
get lock2s.sql:
set lines 10000
set trimspool on
col serial# for 999999
col lc_et for 999999
col l1name for a50
col lmode for a6
col username for a25
select /*+ rule */ distinct
       b.inst_id, a.sid, b.serial#, b.username, b.status,
       --b.audsid,
       --b.module,
       --b.machine, b.osuser,
       b.logon_time,
       decode(lmode,1,'null',2,'RS',3,'RX',4,'S',5,'SRX',6,'X',0,'NONE',lmode) lmode,
       decode(request,1,'null',2,'RS',3,'RX',4,'S',5,'SRX',6,'X',0,'NONE',request) request,
       b.last_call_et LC_ET, a.type TY, a.id1, a.id2,
       d.name||'.'||c.name l1name, a.ctime, b.lockwait, b.event
       --distinct b.inst_id, a.sid, b.username, a.type, d.name||'.'||c.name l1name, a.id1, a.id2,
       --decode(lmode,1,'null',2,'RS',3,'RX',4,'S',5,'SRX',6,'X',0,'NONE',lmode) lmode,
       --decode(request,1,'null',2,'RS',3,'RX',4,'S',5,'SRX',6,'X',0,'NONE',request) request, a.ctime, b.lockwait, b.last_call_et
  from gv$lock a, gv$session b, sys.obj$ c, sys.user$ d,
       (select a.id1 from gv$lock a where a.request > 0) lock1
 where a.id1 = c.OBJ# (+)
   and a.sid = b.sid
   and c.owner# = d.user# (+)
   and a.inst_id = b.inst_id
   and b.username is not null
   and a.id1 = lock1.id1
 order by id1, id2, lmode desc
/
get killlocker.sql:
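A minimal sketch in the same spirit (not necessarily the original killlocker.sql): generate the kill commands for the sessions that are blocking others.

-- single-instance flavor; the original may differ
select 'alter system kill session '''||s.sid||','||s.serial#||''' immediate;'
  from v$session s
 where s.sid in (select blocking_session from v$session where blocking_session is not null);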
Matheus.
271
ORA-04091: Table is Mutating,
Trigger/Function may not see it
No!
This is not a super-table nor an x-table (X-Men joke; that was awful, I know… I'm
sorry).
Very interesting, but not hard to understand. The cause is that the trigger (or a
user-defined PL/SQL function referenced in the statement) attempted to look at (or
modify) a table that was in the middle of being modified by the statement that fired it.
In other words, you are trying to read data that you are modifying at the same time. That
obviously causes an inconsistency, which is the reason for this error: the data is
"mutating". But the error message could be less annoying, right? Oracle and its jokes…
The solution is to rewrite the trigger so it doesn't touch that table, or, in some situations,
you can use an autonomous transaction to make it independent. It can be done using the
clause PRAGMA AUTONOMOUS_TRANSACTION, as in the sketch below.
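A minimal sketch (table and trigger names are illustrative): a row trigger that queries the same table it fires on; with the pragma, the query runs in an independent transaction and sees only committed data, so ORA-04091 is not raised.

create or replace trigger trg_emp_demo
after insert or update on emp
for each row
declare
  pragma autonomous_transaction;
  v_cnt number;
begin
  -- without the pragma, this select on the mutating table would raise ORA-04091
  select count(*) into v_cnt from emp;
  dbms_output.put_line('committed rows in emp: '||v_cnt);
end;
/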
Matheus.
272
ORA-12014: table does not contain a
primary key constraint
Ok, you are trying to create a materialized view involving a database link and hit
an ORA-12014, right?
It bit me some time ago, but it's not complicated to work around: the idea is to base the
materialized view (and its log) on ROWID instead of the primary key, as sketched below.
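A minimal sketch of that ROWID-based approach (schema, table and link names are illustrative):

-- On the source database: create the MV log based on ROWID
create materialized view log on app_owner.source_table with rowid;

-- On the target database: create the MV over the DB link, also WITH ROWID
create materialized view mv_source_table
  refresh fast with rowid
  as select * from app_owner.source_table@remote_db;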
PS: Make sure the username used by the remote_db database link has select privileges on
the MV log. On the source DB, find the MV log table name and grant on it; on the target
side, test the access. A sketch follows.
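A hedged sketch of those checks (names are illustrative):

-- On the source DB: get the MV log table name
select log_owner, master, log_table from dba_mview_logs where master = 'SOURCE_TABLE';

-- Still on the source DB: grant read access to the user used by the DB link
grant select on app_owner.MLOG$_SOURCE_TABLE to dblink_user;

-- On the target DB: confirm the DB link user can read the log
select count(*) from app_owner.MLOG$_SOURCE_TABLE@remote_db;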
See ya!
Matheus.
273
ORA-02062: distributed recovery
# Error/Alert
# Solution
Matheus.
274
Windows: “ORA-12514” After Database
Migration/Moving (Using DNS Alias)
It's usual to use DNS aliases pointing to the SCAN listener. This way, we create an
abstraction layer between clients/applications and the cluster where the database is.
Activities like tierization/consolidation and moving databases between clusters
(converting to Pluggable, etc.) become much more transparent.
Buuuut, if after a database migration, with all the services online and listening, your
client keeps failing with ORA-12514… Remember you are using DNS to build this layer.
Have you tried to flush the DNS cache?
I faced this problem with a Windows application. The solution:
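On Windows, flushing the client DNS cache is a single command:

C:\> ipconfig /flushdns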
Matheus.
275
RS-7445 [Serv MS leaking memory] [It will
be restarted] [] [] [] [] [] [] [] [] [] []
Hello!
Having this error from cell alerthistory.log? Don’t panic!
Take a look in MOS: Exadata Storage Cell reports error RS-7445 [Serv MS
Leaking Memory] (Doc ID 1954357.1) . It’s related to Bug – RS-7445 [SERV MS
LEAKING MEMORY] .
The issue is a memory leak in the Java executable and affects systems running with
JDK 7u51 or later versions. This is relevant for all versions in Release 11.2 to 12.1.
What happens is that the MS process consumes too much memory (up to 2 GB). Normally
MS uses around 1 GB, but because of the bug the allocated memory can grow up to
2 GB. You can check it as in the example below:
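A hedged sketch of the check (run on the storage cell; the SZ column of ps is reported in 4 KB pages, hence the multiplication below):

# as root on the cell, look at the size (SZ) of the MS java process
ps -feal | grep java
# SZ * 4096 = memory in bytes allocated by the process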
Note that 267080 * 4096 bytes is roughly 1043 MB (about 1 GB). If your number is
significantly higher than this, it indicates the presence of the bug.
In case you want to see the memory in use by the MS processes on all cells, it can be
checked with a command like this from any DB node:
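For example, with dcli (the cell_group file and root equivalence are assumptions of this sketch):

dcli -g cell_group -l root "ps -eo pid,rss,args | grep [j]ava"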
This error is ignorable, since MS restarts automatically, resetting the process and its
memory. There is no impact on services; MS is just the monitoring process.
276
kernel.panic_on_oops: New Oracle 12c
Installation Requirement
Hi all,
Do you know what this parameter, required when installing 12c, means?
This parameter controls the kernel’s behaviour when an oops or bug is encountered:
• 1: panic immediately. If the `panic’ sysctl is also non-zero then the machine will
be rebooted.
OOPS is a deviation from correct behavior of the Linux kernel, one that produces a
certain error log.
The better-known kernel panic condition results from many kinds of oops, but other
instances of an oops event may allow continued operation with compromised
reliability.
This is recommended on systems where we want the node to be evicted in case of any
hardware failure or any other issue. To set it, add the line below to /etc/sysctl.conf
and reload the settings:
kernel.panic_on_oops = 1
sysctl -p
KB: https://www.kernel.org/doc/Documentation/sysctl/kernel.txt
Matheus.
277
Tip for the Future: Segmentation fault
because of LD_LIBRARY_PATH
More than once I forgot to set LD_LIBRARY_PATH in new environments and
sometimes I faced awkward errors. The most common is “Segmentation Fault”.
Today I lost almost 15 minutes searching about a Segmentation Fault related to
Datapump on 11.2, until I realized I had forgotten the LD_LIBRARY_PATH again…
The other day, in an upgrade from 11.2.0.3.6 to 11.2.0.4.2, I got stuck on lots of errors
during the upgrade process. Silly mistake again: after a few minutes of errors and
searching I found a post, somewhere, talking about setting these variables.
So, Matheus from the Future: check if LD_LIBRARY_PATH and the other environment
variables are set for the right Oracle Home, for example:
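A minimal sketch (paths are illustrative):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib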
I expect this post save me from this same pain in the future.
Thanks.
Matheus.
278
ORA-02296: cannot enable (string.) – null
values found
Hi all!
Found the error below?
greporadb> alter table TABLE_TEST modify COLUMN_TEST not null;
alter table TABLE_TEST modify COLUMN_TEST not null
*
ERROR at line 1:
ORA-02296: cannot enable (MATHEUSDBA.) - null values found
It happens basically because you have null values in this column. Let's check:
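A minimal check (object names taken from the error above):

select count(*) from TABLE_TEST where COLUMN_TEST is null;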
Ok doke!
Now, what can we do?
1) Fix the problem by updating the null values to a real (or dummy) value.
2) Use the NOVALIDATE clause, like in the sketch below:
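A minimal sketch of both options (the dummy value is an assumption):

-- Option 1: fix the data, then enable the constraint normally
update TABLE_TEST set COLUMN_TEST = 'N/A' where COLUMN_TEST is null;
commit;
alter table TABLE_TEST modify COLUMN_TEST not null;

-- Option 2: enforce the rule only for new/modified rows
alter table TABLE_TEST modify COLUMN_TEST not null novalidate;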
279
(12c) RMAN-07539: insufficient privileges to
create or upgrade the catalog schema
Another “The problem - the fix” post.
# KB:
Upgrade Recovery Catalog fails with RMAN-07539: insufficient privileges (Doc ID
1915561.1)
Unpublished Bug 17465689 – RMAN-6443: ERROR UPGRADING RECOVERY
CATALOG
# Problem
# Solution
– Connect to the catalog database using the 12c (local) Oracle Home:
(and don't worry about the error on alter session).
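A hedged sketch of the usual fix on 12c (granting the extra privileges required by the catalog upgrade; the catalog owner name follows the example below):

-- As SYS on the catalog database, from the 12c Oracle Home:
@?/rdbms/admin/dbmsrmansys.sql
grant recovery_catalog_owner to catalog_mydb;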
280
[oracle@databasesrvr dbs]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jul 21 14:21:27 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: MYDB (not mounted)

RMAN> connect catalog catalog_mydb/catalog_mydb@catalogdb
connected to recovery catalog database
PL/SQL package CATALOG_MYDB.DBMS_RCVCAT version 11.02.00.03 in RCVCAT database is too old

RMAN> upgrade catalog;
recovery catalog owner is CATALOG_MYDB
enter UPGRADE CATALOG command again to confirm catalog upgrade

RMAN> upgrade catalog;
recovery catalog upgraded to version 12.01.00.02
DBMS_RCVMAN package upgraded to version 12.01.00.02
DBMS_RCVCAT package upgraded to version 12.01.00.02
Matheus.
281
ORA-27302: failure occurred at:
sskgpcreates
# Error:
[root@dbsrvr2 ~]# cat /etc/sysctl.conf | grep sem
kernel.sem = 250 32000 100 142
[root@dbsrvr2 ~]# vi /etc/sysctl.conf
[root@dbsrvr2 ~]# cat /etc/sysctl.conf | grep sem
kernel.sem = 250 32000 100 256
[root@dbsrvr2 ~]# sysctl -p
Well done!
Matheus.
282
ORA-15081: failed to submit an I/O operation
to a disk
After losing some disks and one instance of the RAC, the database was stuck with ORA-15081.
A recover was needed. #StayTheTip
# Error
# Solution
SQL> recover database;
Media recovery complete.

SQL> alter database open;
Database altered.
Matheus.
283
PRCR-1079 CRS-2674 CRS-5017 ORA-27102:
out of memory Linux-x86_64 Error: 28: No
space left on device
# Problem
# Solution
In /etc/sysctl.conf, adjust as below and then reload sysctl ("sysctl -p" as root):
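A hedged sketch of the kind of adjustment usually involved for this error (shared memory limits; the values are illustrative and must be sized to your RAM/SGA):

kernel.shmmax = 68719476736
kernel.shmall = 4294967296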
Matheus.
284
ORA-06512 ORA-48168 ORA-12012 for ADR
Job Raising Errors
A database is raising the stack below in the alert log:
The note ORA-12012 And ORA-48168: ADR Sub-system Is Not Initialized (Doc ID
1601769.1) suggests performing maintenance involving a database shutdown… But I
don't want to.
The note Getting Error In Alert Log ORA-51108: Unable To Access Diagnostic
Repository – Retry Command (Doc ID 1586736.1) suggests recreating the Health
Monitor information, through:
As I said, the Diag is not enabled. So, the easiest "workaround" is to just disable the
job:
See ya!
Matheus.
285
x$kglob: ORA-02030: can only select from
fixed tables/views
Hi all!
While selecting from x$kglob with DBA (non-SYS) credentials, I was failing with:

SQL> select count(*) from sys.x$kglob;
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> grant select on sys.x$kglob to dba;
grant select on sys.x$kglob to dba
*
ERROR at line 1:
ORA-02030: can only select from fixed tables/views
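A minimal sketch of the usual way around it (you cannot grant on X$ fixed tables directly, so expose the data through a SYS-owned view; the view name is illustrative):

-- as SYS:
create or replace view x_kglob as select * from sys.x$kglob;
grant select on x_kglob to dba;
-- then query sys.x_kglob instead of x$kglob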
Matheus.
286
RHEL5: Database 10g Installation –
Checking operating system version error
Everything here is old: both the RHEL and the Database versions. But it can be useful if
you are preparing a non-prod lab of your legacy environment, right?
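A hedged sketch of the usual way around the OS version check (the original post's fix may differ):

# either skip the prerequisite checks when launching the installer...
./runInstaller -ignoreSysPrereqs
# ...or add your release to the certified versions list in install/oraparam.ini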
Matheus.
287
ORA-10456: cannot open standby database;
media recovery session may be in progress
Easy, easy… Take a look:
# Error
# Solution
db2database2p:sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Thu Jun 4 20:27:46 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
SQL> startup
ORACLE instance started.
Total System Global Area 1.1224E+11 bytes
Fixed Size 2234920 bytes
Variable Size 6.1472E+10 bytes
Database Buffers 5.0466E+10 bytes
Redo Buffers 299741184 bytes
Database mounted.
ORA-10456: cannot open standby database; media recovery session may be in
progress
288
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit
Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options
Matheus.
289
ORA-28004: invalid argument for function
specified in
PASSWORD_VERIFY_FUNCTION
An unexpected error, right?
Matheus.
290
ORA-27369: job of type EXECUTABLE failed
with exit code: Operation not permitted
This happens when running an external script via the scheduler. The solution:
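A hedged sketch of a common fix for this error: ownership and permissions of the external job files (the install group name is an assumption):

chown root:oinstall $ORACLE_HOME/bin/extjob
chmod 4750 $ORACLE_HOME/bin/extjob
chown root:oinstall $ORACLE_HOME/rdbms/admin/externaljob.ora
chmod 640 $ORACLE_HOME/rdbms/admin/externaljob.ora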
291
Package Body
APEX_030200.WWV_FLOW_HELP Invalid
after Oracle Text Installing
Hi all!
The package body APEX_030200.WWV_FLOW_HELP becomes invalid after an Oracle
Text installation, with the following errors:
It happens basically because the APEX schema has not been granted execute
privileges on CTX_DDL and CTX_DOC. The note below is exactly about it:
The WWV_FLOW_HELP PACKAGE Status is Invalid After Installing Oracle Text
(Doc ID 1335521.1)
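A minimal sketch of the grants and recompilation implied above:

grant execute on CTXSYS.CTX_DDL to APEX_030200;
grant execute on CTXSYS.CTX_DOC to APEX_030200;
alter package APEX_030200.WWV_FLOW_HELP compile body;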
292
ORA-12012: error on auto execute of job
“SYS”.”BSLN_MAINTAIN_STATS_JOB”
Hi all,
Evaluating a database, I detected it was failing to execute the default scheduler job
SYS.BSLN_MAINTAIN_STATS_JOB. This job is an Oracle-defined automatic moving
window baseline statistics computation job that runs only on weekends.
Below the last stack error in the alert log:
According to the notes below, the recommended action is to recreate the DBSNMP
component:
Bug 10110625 – DBSNMP.BSLN_INTERNAL reports ORA-6502 running
BSLN_MAINTAIN_STATS_JOB (Doc ID 10110625.8)
ORA-12012: Error on Auto Execute of job SYS.BSLN_MAINTAIN_STATS_JOB
(Doc ID 1413756.1)
KEWBMBTA: Maintain BSLN Thresholds Failed, Check For Details. (Doc ID
1490391.1)
However, it's a process that can affect other mechanisms. So, I looked further and found
the following note, with the same error, pointing to a privilege issue:
Ora-06508: Pl/Sql: Could Not Find Program Unit Being Called:
“DBSNMP.BSLN_INTERNAL” (Doc ID 1323597.1)
293
But after granting the privilege as the workaround suggested, the failure remained…
After that, while I was querying the DBSNMP objects, I noticed another instance name
active in DBSNMP.BSLN_BASELINES.
I guess this database was created with another instance name and then renamed
without running DBNEWID.
So, I deleted the row and the job started to run successfully:
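A minimal sketch of the check and cleanup (the stale instance name is illustrative):

select * from dbsnmp.bsln_baselines;

delete from dbsnmp.bsln_baselines where instance_name = 'OLD_INSTANCE';
commit;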
294
Execution logs:
MYDB> select *
  2    from (select owner, job_name, log_date, status, run_duration
  3            from dba_scheduler_job_run_details a
  4           where job_name = 'BSLN_MAINTAIN_STATS_JOB'
  5           order by log_date)
  6   where rownum < 10;

OWNER  JOB_NAME                  LOG_DATE                        STATUS      RUN_DURATION
------ ------------------------- ------------------------------- ----------- -------------
SYS    BSLN_MAINTAIN_STATS_JOB   03/04/16 00:00:08,484972 +00:00 FAILED      +000 00:00:08
SYS    BSLN_MAINTAIN_STATS_JOB   10/04/16 00:00:07,943598 +00:00 FAILED      +000 00:00:07
SYS    BSLN_MAINTAIN_STATS_JOB   17/04/16 00:00:08,486526 +00:00 FAILED      +000 00:00:08
SYS    BSLN_MAINTAIN_STATS_JOB   24/04/16 00:00:10,067848 +00:00 FAILED      +000 00:00:09
SYS    BSLN_MAINTAIN_STATS_JOB   29/04/16 13:58:10,779201 +00:00 FAILED      +000 00:00:01
SYS    BSLN_MAINTAIN_STATS_JOB   29/04/16 14:01:04,162900 +00:00 SUCCEEDED   +000 00:00:00
Matheus.
295
Materialized View with DBLink: ORA-00600:
internal error code, arguments: [kkzuasid]
Hello guys!
Not being able to refresh your Materialized View because of this error?
The bad news is there is no workaround (I usually prefer a workaround for this kind of
thing; it's quicker and less complicated).
But the good news is there is a patch for it: Patch 17705023: ORA-600
[KKZUASID] ON MV REFRESH.
This error is related to a defect when trying to refresh a materialized view and
using Query Rewrite in RDBMS 11.2.0.4, and is fixed in 12.2 ( Bug 17705023 :
ORA-600 [KKZUASID] ON MV REFRESH ).
You can find more info in MOS Bug 17705023 – ORA-600 [kkzuasid] on MV refresh
(Doc ID 17705023.8) .
In my situation, as per the documentation, I applied the patch and solved it as quickly as
possible. But reviewing the case to write this post, especially regarding the Query
Rewrite feature, I realize you could maybe recreate your materialized view with the
NOREWRITE hint, or set the parameter QUERY_REWRITE_ENABLED to false, and give
it a shot. Maybe an undocumented workaround? A sketch of both ideas follows.
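A hedged sketch of those two directions (untested, as said above; object names are illustrative):

-- disable query rewrite for the session before the refresh:
alter session set query_rewrite_enabled = false;

-- or recreate the MV forcing NOREWRITE in its defining query:
-- create materialized view mv_example ... as select /*+ NOREWRITE */ ... from t@remote_db;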
Matheus.
296
OUI: RHEL Permission Denied error
Another quick tip about running DBCA:
# Error:
# Solution:
Ok doke?
Matheus.
297
ORA-19751: could not create the change
tracking file
Let’s make it simple to solve the problem:
# Error:
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '+DGDATA/mydb/changetracking/ctf.470.859997781'
ORA-17502: ksfdcre:1 Failed to create file +DGDATA/mydb/changetracking/ctf.470.859997781
ORA-17501: logical block size 4294967295 is invalid
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
ORA-17503: ksfdopn:2 Failed to open file +DGDATA/mydb/changetracking/ctf.470.859997781
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
# Solution:
SQL> alter database disable block change tracking;
Database altered.

SQL> alter database open;
Database altered.
Then, after everything is OK, you fix things by recreating the block change tracking file (BCTF):
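A minimal sketch of the recreation (diskgroup name taken from the error above; with OMF configured you can also omit the file clause):

alter database enable block change tracking using file '+DGDATA';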
MTFBWU!
Matheus.
298
ORA-01548: active rollback segment found,
terminate
# Problem
# Solution
# Why?
The UNDO_MANAGEMENT parameter is set to 'MANUAL', right? To drop any undo
tablespace, the default UNDO must have at least one segment.
Matheus.
299
RMAN-06059: expected archived log not
found
# Error
# Solution
First of all, you need to know which files exist and which don't:
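A minimal sketch of the usual sequence (the crosscheck marks the missing archivelogs as expired, so they can be cleared from the RMAN repository):

RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;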
It's highly recommended that you take a full backup after that, to ensure you have a
recoverable state.
Matheus.
300
ORA-29760: instance_number parameter not
specified
I felt stupid when I lost a few minutes trying to understand this error:
Matheus.
301
ORA-00600: internal error code, arguments:
[ktecgetsh-inc], [2]
Alert showing:
So,
alter system set event="10061 trace name context forever, level 10" scope=spfile;
See ya!
Matheus.
302
ORA-10456: cannot open standby database;
media recovery session may be in progress
A dataguard quick tip!
# Error
SQL> ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE OPEN READ ONLY
*
ERROR at line 1:
ORA-10456: cannot open standby database; media recovery session may be in progress
# Solution
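A minimal sketch of the usual fix (stop managed recovery before opening read only; restart it afterwards if desired):

SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
-- optionally, resume real-time apply (Active Data Guard) afterwards:
SQL> alter database recover managed standby database using current logfile disconnect from session;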
See ya!
Matheus.
303
ORA-01994: GRANT failed: password file
missing or disabled
Quick tip:
KB:
http://docs.oracle.com/cd/B28359_01/server.111/b28310/dba007.htm#ADMIN12478
# OBS 1
“If you are running multiple instances of Oracle Database using Oracle Real
Application Clusters, the environment variable for each instance should point to the
same password file.”
# OBS 2
REMOTE_LOGIN_PASSWORDFILE needs to be set to EXCLUSIVE in order to grant
SYSDBA to a user.
# OBS 3
Users holding password-file privileges can be checked in V$PWFILE_USERS.
# OBS 4
The ENTRIES parameter of orapwd defines how many users can be stored in the
password file (that is, granted SYSDBA).
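A minimal sketch of recreating the password file (single-instance file name; on RAC, point all instances to the same shared file, as noted above):

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=MyStrongPwd entries=10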
Matheus.
304
11.2.0.1: ORA-00600: internal error code,
arguments: [7005], [0], [], [], [], [], [], [], [], [],
[], []
# Error
# Cause
The query causing this error uses a CONTAINS clause on an alphanumeric column with
bind variables. This is a perfect match with the note ORA-0600 [7005] on a Select
Query Using Contains Clause (Doc ID 1176276.1), referencing the unpublished Bug
8770557 ORA-600 [7005] While Running Text Queries.
The symptoms includes this two key factors:
– presence of CONTAINS clause
– use of bind variables
# Solution
Apply 11.2.0.2 patchset or higher, where this issue is fixed or Apply one off Patch
8770557 if available for your version / platform.
See ya!
Matheus.
305
ORA-00845: MEMORY_TARGET not
supported on this system (RHEL)
# Solution:
Make sure that /dev/shm is mounted. You can check this by typing df -k at the
command prompt. It will look something like this:
If you don't find it, you will have to mount it manually as the root user. The size should
be larger than MEMORY_TARGET (or MEMORY_MAX_TARGET).
For example, if MEMORY_TARGET is less than 2 GB, you could mount it like this:
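A minimal sketch, as root (the 2g size is just the example above):

# check the current mount first
df -k /dev/shm
# (re)mount it with the desired size
mount -t tmpfs shmfs -o size=2g /dev/shm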
I recommend adding an entry to /etc/fstab so that the mount persists across reboots.
To do so, add the following line to /etc/fstab:
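For example (size illustrative):

shmfs  /dev/shm  tmpfs  size=2g  0 0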
Helped?
Share this post!
Matheus.
306
ORA-01153: an incompatible media recovery
is active
When trying to start, or increase the parallelism of, the managed recovery process (MRP)
on a Data Guard standby:
It simply happens because there is already a process running; let's check:
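A minimal check for an MRP already running on the standby:

select process, status, thread#, sequence#
  from v$managed_standby
 where process like 'MRP%';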
If you want to change it, just stop it first and then start it again with the clauses you want:
SQL> alter database recover managed standby database cancel;
Database altered.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
Database altered.
See ya!
Matheus.
307
308
Table of contents
Disclaimer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
About the Blog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
GrepOra.com in 2016… . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
GrepOra Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
About the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
ADRCI Retention Policy and Ad-Hoc Purge Script for all Bases . . . . . . . . . . . . . . . . 12
High CPU usage by LMS and Node Evictions: Solved by Setting “_high_priority_processes” . 14
Application Looping Until Lock a Row with NOWAIT Clause . . . . . . . . . . . . . . . . . . 15
VKTM Hang – High CPU Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Oracle TPS: Evaluating Transaction per Second . . . . . . . . . . . . . . . . . . . . . . . . 20
Leap Second and Impact for Oracle Database . . . . . . . . . . . . . . . . . . . . . . . . . 22
HANGANALYZE Part 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
HANGANALYZE Part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
ASHDUMP for Instance Crash/Hang ‘Post Mortem’ Analysis . . . . . . . . . . . . . . . . . 30
SYSTEMSTATE DUMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Upgrade your JDBC and JDK before Upgrade your Database to 12c Version! . . . . . . . . 36
Unplug/Plug PDB between different Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Database Migration/Move with RMAN: Are you sure nothing is missing? . . . . . . . . . . . 42
Vulnerability: Decrypting Oracle DBlink password (<11.2.0.2) . . . . . . . . . . . . . . . . . 43
Ordering Sequences over RAC – Hang on ‘DFS lock handle’ . . . . . . . . . . . . . . . . . 45
Infiniband Error: Cable is present on Port “X” but it is polling for peer port . . . . . . . . . . . 49
After adding Datafile in Primary the MRP Stopped in Physical Standby (Dataguard) . . . . . 52
Lock by DBLink – How to locate the remote session? . . . . . . . . . . . . . . . . . . . . . 55
Listing Sessions Connected by SID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
VPD: “row cache objects” latch contention . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Compilation Impact: Object Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
RAC on AIX: Network Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Grepping Entries from Alert.log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Grepping Alert by Day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Searching entries on Alert.log: A Better Way . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Alter (Fix) Oracle Database Date . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Explain ORA-XXX on SQL*Plus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Oracle Database Licensing: First Step! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Getting Oracle Parameters: Hidden and Unhidden . . . . . . . . . . . . . . . . . . . . . . . 71
Application Hangs: resmgr:become active . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
How to Prevent Automatic Database Startup . . . . . . . . . . . . . . . . . . . . . . . . . . 74
TFA – Collecting Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
ARCH Process Killed – Fix Without Restart . . . . . . . . . . . . . . . . . . . . . . . . . . 76
DBA_TAB_MODIFICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Oracle – Lost user’s password? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Scheduler Job by Node (RAC Database) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
ORA-01950 On Insert but not on Create Table . . . . . . . . . . . . . . . . . . . . . . . . . 81
Adding datafile hang on “enq: TT – contention” . . . . . . . . . . . . . . . . . . . . . . . . 82
Quick guide about SRVCTL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Saving database space with ASSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Flashback- Part 1 (Flashback Drop) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Flashback – Part 2 (Flashback Query) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Flashback- Part 3 (Flashback Versions Query) . . . . . . . . . . . . . . . . . . . . . . . . . 92
Flashback – Part 4 (Flashback Transaction Query) . . . . . . . . . . . . . . . . . . . . . . 94
Flashback – Part 5 (Flashback Table) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Flashback – Part 6 (Flashback Database) . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Flashback – Part 7 (Flashback Data Archive) . . . . . . . . . . . . . . . . . . . . . . . . . 103
Alert Log: “Private Strand Flush Not Complete” on Logfile Switch . . . . . . . . . . . . . . 106
TPS Chart on PL/SQL Developer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
PL/SQL Developer Taking 100% of Database CPU . . . . . . . . . . . . . . . . . . . . . . 110
Installing and Configuring ASMLIb on Oracle Linux 7 . . . . . . . . . . . . . . . . . . . . . 112
ASM: Adding disk “_DROPPED%” FORCE . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Adding ASM Disks on RHEL Cluster with Failgroups . . . . . . . . . . . . . . . . . . . . . 118
Manually Mounting ACFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Kludge: Mounting ACFS Thought Shellscript . . . . . . . . . . . . . . . . . . . . . . . . . 122
CRSCTL: AUTO_START of Cluster Services (ACFS) . . . . . . . . . . . . . . . . . . . . 123
Changing ACFS mount point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
ORA-27054: NFS file system where the file is created or resides is not mounted with correct
options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Error: Starting ACFS in RHEL 6 (Can’t exec “/usr/bin/lsb_release”) . . . . . . . . . . . . . 126
Create SPFILE on ASM from PFILE on Filesystem . . . . . . . . . . . . . . . . . . . . . . 127
ORA-15186: ASMLIB error function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Charsets: Single-Byte vs Multibyte Encoding Scheme Issue . . . . . . . . . . . . . . . . . 129
Date Format in RMAN: Making better! . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Creating RMAN Backup Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
EXP Missing Tables on 11.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
DDBoost: sbtbackup: dd_rman_connect_to_backup_host failed . . . . . . . . . . . . . . . 134
EXP-00079 – Data Protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Backup Not Backuped Archivelogs and Delete Input . . . . . . . . . . . . . . . . . . . . . 136
How to list all my Oracle Products from Database park? . . . . . . . . . . . . . . . . . . . 137
How to list all my Oracle Products from Application park? . . . . . . . . . . . . . . . . . . 139
Service Detected on OEM but not in SRVCTL or SERVICE_NAMES Parameter? . . . . . . 141
Manipulating JMS queues using WLST Script . . . . . . . . . . . . . . . . . . . . . . . . 142
Decrypting WebLogic Datasource Password . . . . . . . . . . . . . . . . . . . . . . . . . 143
Setting up a weblogic Result cache on Oracle Service Bus . . . . . . . . . . . . . . . . . . 145
Avoiding lost messages in JDBC Persistent Store, when processing Global Transactions with
JMS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Reset the AdminServer Password in WebLogic 11g and 12c . . . . . . . . . . . . . . . . . 151
Configuration Coherence Server Out-of-Process in OSB 12C . . . . . . . . . . . . . . . . 152
WebLogic AdminServer Startup stopped at “Initializing self-tuning thread pool” . . . . . . . 155
Weblogic starting with the operating system . . . . . . . . . . . . . . . . . . . . . . . . . 156
WLST easeSyntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Quickly change Weblogic to Production Mode . . . . . . . . . . . . . . . . . . . . . . . . 158
Weblogic in debug mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Apache 2.4 with port redirect to Weblogic 12c . . . . . . . . . . . . . . . . . . . . . . . . 160
Oracle Licensing: Weblogic Tip! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Weblogic JRF files in /tmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Bypass user and password in the Oracle BAM ICommand. . . . . . . . . . . . . . . . . . . 164
Welcome to our book, our blog and our world: have some fun and
view/review/learn/laugh with some of our struggles and personal notes to our
future selves.
Use it to view, learn and review some curiosities, tips and useful stuff
for the daily challenges and struggles of working with Oracle tech. But
mostly, have fun! This is a book written by Oracle geeks for Oracle geeks.
| GREP ORA
http://grepora.wordpress.com