
Author: N Kandasamy
Date: July 08, 2015

How To diagnose the "Root Cause" of OPP (java) consuming High CPU
This document helps when the OPP (java) process hangs or spins on a CPU (100%), resulting in a performance issue: concurrent requests either run slowly or end with errors/warnings because OPP is unavailable.
To diagnose the "Root Cause" of OPP (java) consuming high CPU, perform the steps below. They help identify the report (Template Code) that could be the potential cause of OPP hanging. Once you have identified the Template Code, take
the corrective action to fix it, as suggested at the end of this document.

Steps To Collect The Required Details


1. Use the top/prstat OS command (or the analogous command for your platform) to identify the pid of OPP (java). This is the
<pid> of the OPP java process that consumes high CPU.
2. Generate a java thread dump using the <pid> from step 1. This writes additional log details into the OPP log, which
helps narrow down the potential cause.
$ kill -3 <pid>
3. Identify the relevant OPP log file (get the absolute path and file name using the command below).
$ ps -ef | grep <pid>
4. Get the per-thread details using the command below and the <pid> from step 1.
$ ps -eLo pid,ppid,tid,pcpu,comm | grep <pid>
5. Once you know the thread id from step 4, you can find the report that could be the potential "Root Cause" of OPP (java)
consuming high CPU.
Note: - Collect all the details in one go to get a complete picture (see the sketch below).
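
A minimal sketch that collects everything in one pass, assuming a Linux host, that <pid> is the OPP java pid from step 1, and that /tmp/opp_diag is an illustrative, writable scratch directory:

$ PID=<pid>; mkdir -p /tmp/opp_diag
$ ps -ef | grep $PID > /tmp/opp_diag/opp_cmdline.txt                           # full java command line, including -Dlogfile
$ ps -eLo pid,ppid,tid,pcpu,comm | grep $PID > /tmp/opp_diag/opp_threads.txt   # per-thread CPU usage
$ kill -3 $PID                                                                 # thread dump is written into the OPP log itself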

DEMONSTRATION
STEP 1 (top/prstat)
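
A hedged sketch of the commands for this step, assuming a Linux procps top and a Solaris prstat (use whichever applies to your platform):

$ top -c          # press Shift+P to sort by %CPU; note the pid of the applmgr-owned java process
$ prstat -s cpu   # Solaris: processes sorted by CPU usage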

STEP 2 (java thread dump)


This is required to write additional log details into the current OPP log. To know the name of the log file, refer to the next step.
$ kill -3 25526
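
A hedged alternative, assuming the jstack utility exists in the same JDK that runs OPP (the JDK path is visible in the ps output of the next step) and that /tmp/opp_threaddump.txt is an illustrative writable location; unlike kill -3, this writes the dump to a separate file instead of appending it to the OPP log:

$ /u01/product/1013/appsutil/jdk/bin/jstack 25526 > /tmp/opp_threaddump.txt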

STEP 3 (Identify the OPP file name)


applmgr  25526     1 99 Jun25 ?  8-23:27:31 /u01/product/1013/appsutil/jdk/bin/java -DCLIENT_PROCESSID=25526 -server -Xmx384m -XX:NewRatio=2 -XX:+UseSerialGC -Doracle.apps.fnd.common.Pool.leak.mode=stderr:off -verbose:gc -mx2048m -Ddbcfile=/u01/inst/apps/U01_my_apps_server/appl/fnd/12.0.0/secure/U01.dbc -Dcpid=488233 -Dconc_queue_id=6269 -Dqueue_appl_id=0 -Dlogfile=/u01/applcsf/log/U01_my_apps_server/FNDOPP488233.txt -DLONG_RUNNING_JVM=true -DOVERRIDE_DBC=true -DFND_JDBC_BUFFER_MIN=1 -DFND_JDBC_BUFFER_MAX=2 oracle.apps.fnd.cp.gsf.GSMServiceController
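
A hedged one-liner to pull just the OPP log file location out of that command line (tr splits the arguments onto separate lines; 25526 is the pid from step 1):

$ ps -ef | grep 25526 | tr ' ' '\n' | grep Dlogfile

which should print a line like:

-Dlogfile=/u01/applcsf/log/U01_my_apps_server/FNDOPP488233.txt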

STEP 4 (Getting thread details)


$ ps -eLo pid,ppid,tid,pcpu,comm | grep 25526
25526     1 25526  0.0 java
25526     1 25544  0.0 java
25526     1 25592  0.0 java
25526     1 25600  0.0 java
25526     1 25604  0.0 java
25526     1 25610  0.0 java
25526     1 25611  0.0 java
25526     1 25612  0.0 java
25526     1 25613  0.0 java
25526     1 25614  0.0 java
25526     1 25675  0.0 java
25526     1 25676  0.0 java
25526     1 25680  0.0 java
25526     1 26005  0.0 java
25526     1 26016  0.0 java
25526     1 26019  0.0 java
25526     1 26020  0.0 java
25526     1 26021  0.0 java
25526     1 26045  0.0 java
25526     1 26046  0.0 java
25526     1 26974  0.0 java

(Ignore the thread IDs showing 0.0 CPU)

25526     1  1289 71.2 java  <======= One of the thread IDs taking high CPU time
25526     1 27867 70.6 java  <======= One of the thread IDs taking high CPU time
25526     1 18893 31.6 java
25526     1 30698 31.6 java
25526     1 10406 31.4 java
25526     1 10408  0.0 java
25526     1  6308 31.4 java
25526     1 19621 30.8 java
25526     1 26886 30.6 java
25526     1 31297 26.1 java
25526     1 27612 25.8 java
25526     1 22336 26.1 java

Note: - Find the analogous command for your platform (ps -eLo pid,ppid,tid,pcpu,comm is Linux syntax) that gives similar per-thread output; a sketch for sorting it by CPU is shown below.
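
A hedged refinement of the same command, assuming GNU sort and head are available, that lists the busiest threads of the OPP process first (the fourth column is pcpu):

$ ps -eLo pid,ppid,tid,pcpu,comm | grep 25526 | sort -k4 -nr | head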

STEP 5 (Identify the potential cause)


1. From step 4, get the thread id 1289.
2. Convert the thread id 1289 to its hexadecimal value using any tool (a command-line sketch follows this list).
   a. For example http://www.binaryhexconverter.com/decimal-to-hex-converter
      i. Decimal Value ==> 1289
      ii. Hexadecimal Value ==> 509
3. Open the OPP log file FNDOPP488233.txt that was identified in Step 3.
4. Search the log for the hexadecimal value "509" you just converted; it appears as nid=0x509 on a RUNNABLE thread, as in the stack trace below.
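
A hedged command-line alternative to items 2 and 4, assuming a POSIX shell whose printf supports the %x format and the log path identified in Step 3:

$ printf '0x%x\n' 1289
0x509
$ grep -n "nid=0x509" /u01/applcsf/log/U01_my_apps_server/FNDOPP488233.txt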
"488233:RT6083684" daemon prio=10 tid=0x09f83000 nid= 0x509 runnable [0x6dbad000]
java.lang.Thread.State: RUNNABLE
at oracle.xdo.parser.v2.XSLTContext.reset(XSLTContext.java:346)
at oracle.xdo.parser.v2.XSLProcessor.processXSL(XSLProcessor.java:285)
at oracle.xdo.parser.v2.XSLProcessor.processXSL(XSLProcessor.java:155)
at oracle.xdo.parser.v2.XSLProcessor.processXSL(XSLProcessor.java:192)
at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at oracle.apps.xdo.common.xml.XSLT10gR1.invokeProcessXSL(XSLT10gR1.java:677)
at oracle.apps.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:425)
at oracle.apps.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:244)
at oracle.apps.xdo.common.xml.XSLTWrapper.transform(XSLTWrapper.java:182)
at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:1044)
at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:997)
at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:212)
at oracle.apps.xdo.template.FOProcessor.createFO(FOProcessor.java:1665)
at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:975)
at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5978)
at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3470)
at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3559)
at oracle.apps.fnd.cp.opp.XMLPublisherProcessor.process(XMLPublisherProcessor.java:305)
at oracle.apps.fnd.cp.opp.OPPRequestThread.run(OPPRequestThread.java:184)

5. From the output above, take the string 6083684, which is the Concurrent Request ID.
6. You can now search for 6083684 in the OPP log (i.e. FNDOPP488233.txt) again; a grep sketch is shown after the log excerpt below.
7. You should be able to locate the lines below, which provide the Template code CSTRINVR_XML, the
"Root Cause" of OPP hanging or consuming high CPU.
[6/29/15 10:28:18 AM] [OPPServiceThread1] Post-processing request 6083684.
[6/29/15 10:28:18 AM] [488233:RT6083684] Executing post-processing actions for request 6083684.
[6/29/15 10:28:18 AM] [488233:RT6083684] Starting XML Publisher post-processing action.
[6/29/15 10:28:18 AM] [488233:RT6083684]
Template code: CSTRINVR_XML
Template app: BOM
Language: en
Territory: US
Output type: PDF
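
Step 6 can also be done from the command line; a hedged sketch, assuming the full log path identified in Step 3 (-n prints matching line numbers so you can jump to the surrounding template details in an editor):

$ grep -n "6083684" /u01/applcsf/log/U01_my_apps_server/FNDOPP488233.txt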

8. If required, pick up another thread ID from STEP 4 that consumes high CPU and perform the same diagnosis
(steps 1-7); in this example it identifies another Template code, CSTRAIVR_XML:
[7/1/15 4:09:29 PM] [488233:RT6124449] Starting XML Publisher post-processing action.
[7/1/15 4:09:29 PM] [488233:RT6124449]
Template code: CSTRAIVR_XML
Template app: BOM
Language: en
Territory: US
Output type: RTF

Possible Action To Fix: The Template code that you have identified could be either Standard (provided by Oracle) or Custom (designed by you).
1. Standard Code
2. Custom Code

Standard Code - Tuning


I) Ensure you have applied all relevant XML Publisher patches as per (Doc ID 1138602.1). This helps improve the performance of
both Standard and Custom code.
II) As per the example given in this guide, it is understood that CSTRINVR_XML & CSTRAIVR_XML are the "Root Cause"
of OPP (java) consuming high CPU. Now it is time to look for known solutions.
Since CSTRINVR_XML & CSTRAIVR_XML are Standard reports, you may search for a known solution in the My Oracle
Support Knowledge Base. If you do not find one, you may seek help from Oracle Support.
In this case, searching the Knowledge Base should surface the notes below, which you should review to
progress further.
1. For CSTRINVR_XML, you need to review
Intransit Value Report (XML Version) Slow Performance After Patch 14365559:R12.BOM.C (Doc ID 1626110.1)

2. For CSTRAIVR_XML, you need to review
CSTRAIVR_XML All Inventories Value Report (XML) Completes In Warning: The concurrent manager has timed out waiting for
the Output Post-processor to finish (Doc ID 1524297.1)

Note: - It is recommended to consult the respective Support team (i.e. BOM, INV...) before applying any patch or
change, if you are not sure.
Custom Code - Tuning
1. First, you may stop running the program that uses this template code for a day or so. This is to monitor and re-confirm whether
OPP hangs because of this concurrent request.
2. As it is a Custom Template code, you need to tune it yourself. Due to the nature of customizations, guidance may be
limited per (Doc ID 122452.1). However, you may use the tips below.
I. Ensure you have applied the available patches for Oracle XML Publisher embedded in the Oracle E-Business Suite (Doc ID 1138602.1).
II. To optimize an RTF template, keep the RTF simple:
a. Use tables to control precisely where field data will be placed in the report.
b. Push expensive joins down to the database level instead of performing them at the template level.
c. Many calculations are better performed in the data model.
d. Sorting large data sets is typically better performed by the database (indexes, efficient disk sorting, etc.).
e. Checking the "Data already sorted" option in the Table Wizard makes use of group-adjacent, which does not
re-sort the data.
f. Don't overcomplicate your templates. Use sub-templates for re-use and complex code.
g. Use the full relative path for large data sets, i.e.
instead of <?for-each:DEPT?>
use
<?for-each:/DEPT_SALS/DEPT?>.

III. You may rewrite the RTF from scratch if possible. Sometimes a rewrite can be easier than tuning the existing template.
IV. Also review Section 5 (Common RTF Optimization Issues) of Note 1410160.1 - R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues.

Conclusion
To resolve high CPU consumption by OPP, collect all of the details above to isolate and identify the report
(Template Code) that caused the issue. Once you have identified the Template Code, take corrective action to fix it. If it is a
standard Template code, please engage the respective Support team (i.e. BOM, INV...). If it is a custom Template code, you need
to tune it yourself using the guidance above.
