
http://www.toadworld.com/Education/StevenFeuersteinsPLSQLExperience/QuickandUsefulCodeQusefuls/Quseful1/tabid/180/Default.aspx

Quseful #1: Trace argument passing

What's the point?

A very good habit to develop is to add application-level tracing to your code. This tracing capability should be something you can turn on and off from outside your application code. Use it to keep track of application-specific information that is being used or processed by your code. Options for tracing include:

- The DBMS_APPLICATION_INFO package
- The log4plsql open source utility

- The qd_runtime package of the Quest CodeGen Utility

Whichever tracing option you choose, the information that most often needs to be traced or verified is the set of values passed into a program through its parameter list (IN and IN OUT arguments). Programmers usually write such tracing code themselves, which can be a major pain in the neck when you have lots of arguments. Below you will find a program that extracts the arguments of a program from the ALL_ARGUMENTS data dictionary view and generates a starting point for your argument tracing. It assumes that your tracing function takes two string inputs:

- the context: defaulted to the name of the program you provided
- the trace message: a single string that concatenates all the arguments in the form ARGUMENT_NAME = ARGUMENT_VALUE

You may well need to change either this program or its output to match your actual tracing interface, or to remove any arguments whose types do not allow implicit conversion to strings. At least it will have generated a nice starting point for you, saving a bunch of time.

Note: many thanks to Jornica for his help in enhancing and testing this utility!

Show me the code!

Sorry, rather than show you all the code here (very clumsy), I offer the source code and any supporting files in this zip file. You can also download my entire "demo zip", containing all the scripts and reusable code that are part of my regular trainings. The zip for this Quseful is inside that zip as well.

How do I use it?

Call gen_trace_call and pass it the name of the program for which you need argument tracing. It then queries the argument information from ALL_ARGUMENTS and generates code by displaying it on the screen with DBMS_OUTPUT. This program has five arguments:

- pkg_or_prog_in - The name of the package that contains the subprogram you want to trace, or the name of the schema-level function or procedure you want to trace.
- pkg_subprog_in - If tracing a program in a package, provide the name of that function or procedure here. If tracing a schema-level program, pass NULL.
- nest_tracing_in - Pass TRUE (the default) if you want to nest your tracing call within a conditional statement that first checks whether tracing is enabled. This is useful for reducing the runtime overhead of tracing when it is disabled.

- tracing_enabled_func_in - The name of the function (or chunk of code) that you want to run to see if tracing is enabled. The default is 'qd_runtime.trace_enabled', which is the function used by the Quest CodeGen Utility to check whether tracing is enabled.

- trace_func_in - The name of the function that you want to call to do the tracing. The default is 'qd_runtime.trace', which is the trace function offered by the Quest CodeGen Utility.

Examples

Here are some examples, generating code for programs that are available in my "demo zip" file. Please note that I have formatted all code using Toad's auto-formatter; it will not be quite as pretty "out of the box". I have also turned on serveroutput before running these scripts.

1. A schema-level function (betwnstr.sf):

   DECLARE
      /* AFTER ENTERING - IN and IN OUT argument tracing */
      PROCEDURE trace_in_arguments
      IS
         FUNCTION bool_to_char (bool_in IN BOOLEAN) RETURN VARCHAR2
         IS
         BEGIN
            IF bool_in THEN RETURN 'TRUE';
            ELSIF NOT bool_in THEN RETURN 'FALSE';
            ELSE RETURN 'NULL';
            END IF;
         END bool_to_char;
      BEGIN
         IF qd_runtime.trace_enabled
         THEN
            qd_runtime.TRACE ('BETWNSTR'
                            , 'STRING_IN=' || string_in
                              || ' - START_IN=' || start_in
                              || ' - END_IN=' || end_in
                              || ' - INCLUSIVE_IN=' || bool_to_char (inclusive_in)
                             );
         END IF;
      END trace_in_arguments;

      /* BEFORE LEAVING - OUT and IN OUT argument tracing */
      PROCEDURE trace_out_arguments
      IS
      BEGIN
         IF qd_runtime.trace_enabled
         THEN
            qd_runtime.TRACE ('BETWNSTR', 'RETURN_VALUE=' || return_value);
         END IF;
      END trace_out_arguments;
   BEGIN
      NULL;
   END;

You will find an example of betwnstr that includes this tracing logic in the betwnstr_with_tracing.sf file.

2. A function inside a package (dyn_placeholder.pks/pkb):

   DECLARE
      /* AFTER ENTERING - IN and IN OUT argument tracing */
      PROCEDURE trace_in_arguments
      IS
         FUNCTION bool_to_char (bool_in IN BOOLEAN) RETURN VARCHAR2
         IS
         BEGIN
            IF bool_in THEN RETURN 'TRUE';
            ELSIF NOT bool_in THEN RETURN 'FALSE';
            ELSE RETURN 'NULL';
            END IF;
         END bool_to_char;
      BEGIN
         IF qd_runtime.trace_enabled
         THEN
            qd_runtime.TRACE ('DYN_PLACEHOLDER.ALL_IN_STRING'
                            , 'STRING_IN=' || string_in
                              || ' - DYN_PLSQL_IN=' || bool_to_char (dyn_plsql_in)
                             );
         END IF;
      END trace_in_arguments;

      /* BEFORE LEAVING - OUT and IN OUT argument tracing */
      PROCEDURE trace_out_arguments
      IS
      BEGIN
         IF qd_runtime.trace_enabled
         THEN
            qd_runtime.TRACE ('DYN_PLACEHOLDER.ALL_IN_STRING'
                            , 'RETURN_VALUE=' || return_value
                             );
         END IF;
      END trace_out_arguments;
   BEGIN
      NULL;
   END;

3. Same function, but using overrides for the tracing programs:

   DECLARE
      /* AFTER ENTERING - IN and IN OUT argument tracing */
      PROCEDURE trace_in_arguments
      IS
         FUNCTION bool_to_char (bool_in IN BOOLEAN) RETURN VARCHAR2
         IS
         BEGIN
            IF bool_in THEN RETURN 'TRUE';
            ELSIF NOT bool_in THEN RETURN 'FALSE';
            ELSE RETURN 'NULL';
            END IF;
         END bool_to_char;
      BEGIN
         IF mypkg.tracing_on ()
         THEN
            mypkg.show_action ('DYN_PLACEHOLDER.ALL_IN_STRING'
                             , 'STRING_IN=' || string_in
                               || ' - DYN_PLSQL_IN=' || bool_to_char (dyn_plsql_in)
                              );
         END IF;
      END trace_in_arguments;

      /* BEFORE LEAVING - OUT and IN OUT argument tracing */
      PROCEDURE trace_out_arguments
      IS
      BEGIN
         IF mypkg.tracing_on ()
         THEN
            mypkg.show_action ('DYN_PLACEHOLDER.ALL_IN_STRING'
                             , 'RETURN_VALUE=' || return_value
                              );
         END IF;
      END trace_out_arguments;
   BEGIN
      NULL;
   END;

Gotchas

Keep the following in mind:

- Make sure you have serveroutput turned on to see the output from this program.
- If your parameter list contains complex datatypes, like records and collections, you will definitely need to modify the output before it will work.
- It will only generate trace information for programs defined in the current schema. You can add a schema argument to the program and change the user_arguments reference to all_arguments to generate code for programs in other schemas.
- On Oracle9i, you will still face a limit of 255 characters in a call to DBMS_OUTPUT.PUT_LINE (raised to 32K in 10g and above). You can avoid this issue by substituting for DBMS_OUTPUT.PUT_LINE a program that works around it; a number of such programs are available in my "demo zip", including the pl.sp procedure and the p.pks/pkb package.
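To get a feel for what the generator works from, here is a small sketch (my own illustration, not the gen_trace_call source) of the kind of USER_ARGUMENTS query such a utility relies on. The package and program names are hypothetical:

```sql
-- List the IN and IN OUT arguments of a packaged subprogram, in declaration
-- order, from the USER_ARGUMENTS data dictionary view.
DECLARE
   CURSOR args_cur
   IS
      SELECT argument_name, data_type, in_out
        FROM user_arguments
       WHERE package_name = 'DYN_PLACEHOLDER'        -- hypothetical package
         AND object_name = 'ALL_IN_STRING'           -- hypothetical program
         AND in_out IN ('IN', 'IN/OUT')
         AND argument_name IS NOT NULL
       ORDER BY position;
BEGIN
   FOR rec IN args_cur
   LOOP
      DBMS_OUTPUT.put_line (
         rec.argument_name || ' (' || rec.data_type || ', ' || rec.in_out || ')');
   END LOOP;
END;
```

A generator like gen_trace_call loops over rows like these and emits one concatenation term per argument.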

Quseful #2: The String Tracker Package

What's the point?

Sometimes you need to be able to keep track of strings (names of some sort, usually) that you have used, so you do not use them again. I ran into this need, in fact, when I was building some backend code for Quest Code Tester for Oracle. We generate test code (a PL/SQL package) for the tests you describe through the UI. That generated code includes declarations of variables. I can't declare a variable with the same name more than once, so I need to remember what I previously declared. To do that I built the qu_used package, which evolved into the string_tracker package.

The package requires Oracle Database 9i Release 2 and above, since it takes advantage of string-indexed collections. It is, I believe, an excellent demonstration of the elegance possible in one's code through the use of this structure. I hope you can get as much value out of this package as I have.
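As a taste of that elegance, here is a minimal sketch of the core idea (my own illustration, not the actual string_tracker code): the string itself serves as the collection index, so "have I used this name?" becomes a single EXISTS call.

```sql
DECLARE
   -- A collection indexed by string: the "used" strings ARE the indexes.
   TYPE used_aat IS TABLE OF BOOLEAN INDEX BY VARCHAR2 (32767);

   l_used   used_aat;

   PROCEDURE mark_as_used (string_in IN VARCHAR2)
   IS
   BEGIN
      l_used (string_in) := TRUE;
   END mark_as_used;

   FUNCTION string_in_use (string_in IN VARCHAR2) RETURN BOOLEAN
   IS
   BEGIN
      RETURN l_used.EXISTS (string_in);
   END string_in_use;
BEGIN
   mark_as_used ('l_order_total');

   IF string_in_use ('l_order_total')
   THEN
      DBMS_OUTPUT.put_line ('Already declared; skip it.');
   END IF;
END;
```

No loops, no searching: the lookup is delegated entirely to the collection.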

Show me the code!

Sorry, rather than show you all the code here (very clumsy), I offer the source code and any supporting files in this zip file. You can also download my entire "demo zip", containing all the scripts and reusable code that are part of my regular trainings. The zip for this Quseful is inside that zip as well. Here are the files in the Quseful2.zip:

- string_tracker3.pks - the latest and greatest specification of the string_tracker package
- string_tracker3.pkb - the latest and greatest body of the string_tracker package

- string_tracker.sql - a demonstration of using this code (also found below)
- q##STRING_TRACKER.qut - a Quest Code Tester export of the test definition I built to verify that string_tracker works. You can import this into an installation of Code Tester and confirm for yourself that string_tracker works as advertised.

How do I use it?

The package contains several programs:

- string_tracker.clear_all_lists - Deletes all lists you may have defined in string_tracker in your session.
- string_tracker.clear_list - Deletes just the list specified in the call to clear_list.
- string_tracker.create_list - Creates a new list. Provide the name of the list, whether or not you want the strings in the list to be case-sensitive, and whether you want to overwrite a list that already exists with this name.
- string_tracker.mark_as_used - Marks the specified string as "used" in the specified list.

- string_tracker.string_in_use - If the specified string is currently "used" in the specified list, returns TRUE; otherwise, returns FALSE.

Examples

Here is an example of using string_tracker that mimics my own actual application of this package inside Code Tester. I have a collection of outcomes (the tests I will perform after a test is run). For each outcome, I need to declare a local variable to hold the data. Since I can create more than one outcome for a particular OUT argument, I must be sure to avoid duplicate declarations.

   DECLARE
      /* Create a constant with the list name to avoid multiple, hard-coded
         references. Notice the use of the subtype declared in the
         string_tracker package to declare the list name. */
      c_list_name   CONSTANT string_tracker.list_name_t := 'outcomes';

      /* QCGU: A collection based on a %ROWTYPE associative array type */
      l_outcomes    qu_outcome_tp.qu_outcome_tc;
   BEGIN
      /* Create the list, wiping out anything that was there before. */
      string_tracker.create_list (list_name_in => c_list_name
                                , case_sensitive_in => FALSE
                                , overwrite_in => TRUE
                                 );

      /* QCGU: get all the outcome rows for the specified test case. */
      l_outcomes := qu_outcome_qp.ar_fk_outcome_case (l_my_test_case);

      /* For each outcome... */
      FOR indx IN 1 .. l_outcomes.COUNT
      LOOP
         /* If the string has not already been used... */
         IF NOT string_tracker.string_in_use (c_list_name
                                            , l_outcomes (indx).variable_name
                                             )
         THEN
            /* Add the declaration to the test package. */
            generate_declaration (l_outcomes (indx));

            /* Make sure I don't generate duplicate declarations. */
            string_tracker.mark_as_used (c_list_name
                                       , l_outcomes (indx).variable_name
                                        );
         END IF;
      END LOOP;

      /* Clean up! */
      string_tracker.clear_list (c_list_name);
   END;

Gotchas

Keep the following in mind:

- With string_tracker, you can keep track of multiple lists of used strings (approximately 4.3 billion of them), and each list may contain approximately 4.3 billion strings.
- These lists only persist for the duration of your session (they are stored in package variables), and they consume PGA memory.
- The package requires Oracle Database 9i Release 2 and above, since it takes advantage of string-indexed collections.

Quseful #3: Don't put COMMIT; in your code!

What's the point?

First, here is my recommendation to you: never call COMMIT; or ROLLBACK; directly in your code. Instead, call a program that will do the commit for you, and design that program so you can dynamically turn commits/rollbacks on and off, without changing the application code.

Why would I say that? Because a COMMIT; in your code is an example of hard-coding, and as we all know, hard-coding is bad. "Hard-coding?" you ask. "What is Steven talking about?" Everyone knows what hard-coding is: you put a literal value directly in your code instead of "hiding it" behind a variable, constant or function name. As in:

   IF l_employee.salary > 10000000
   THEN
      must_be_ceo ();
   END IF;

And we all know that this is a bad thing to do; even if it doesn't seem as though that literal value could ever change, we know that it will, and then we have to track down every occurrence of the number and change it. So that's the easy part of hard-coding.
The hard part is recognizing all the different kinds of hard-coding that can appear in your code. For example, I suggest to you that every time you write COMMIT; or ROLLBACK; in your code, you have hard-coded the transaction boundary. That is, once you commit, you cannot undo your changes. And once you roll back, those changes are gone. Lost forever.

"Well, duh!" you are likely thinking. "That's the whole point of those statements. Now you are just being silly."

Not at all. This is one of those situations that seem so clear at first glance, but upon closer inspection, one realizes that it is a bit more complicated. Suppose I have created a program to adjust the popularity ratings of my company's products, partitioned by gender. The specifications for this program call for a commit, so I write the following:

   PROCEDURE adjust_ratings (gender_in IN VARCHAR2)
   IS
   BEGIN
      .... execute many queries and DML statements ....
      COMMIT;
   END adjust_ratings;

It is then time to test my program (which I will do with Quest Code Tester for Oracle). I must write the code to set up the various tables on which the program depends (and to which it writes). Some of these tables have hundreds of thousands of rows of data, so it is not at all practical to load them from scratch each time. In fact, what really makes the most sense is to be able to run my program, look at the changes to the tables, and then (assuming something is still wrong) issue a rollback after running adjust_ratings to return the state of the data back to its starting point. No problem! I just go into my code and make this change:

   PROCEDURE adjust_ratings (gender_in IN VARCHAR2)
   IS
   BEGIN
      .... execute many queries and DML statements ....
      -- Don't commit while testing
      -- COMMIT;
   END adjust_ratings;

Now I can run my tests, roll back, run some more tests, without having to go through an elaborate, time-consuming setup process. And when I have fully tested the program and am sure it works? I change the program back to its original state:

   PROCEDURE adjust_ratings (gender_in IN VARCHAR2)
   IS
   BEGIN
      .... execute many queries and DML statements ....
      COMMIT;
   END adjust_ratings;

So let's recap those steps:

1. Write program.
2. Modify program for testing.
3. Test program until you are sure it works.
4. Then change the program.

What's wrong with this picture? You are not supposed to change your code after you finish testing! Sure, it's not a big deal to comment the COMMIT; statement out and back in, but what if there are dozens of such statements in your code? How will you make sure that you have changed them all? Oh, and as for "commit as hard-coding," do you see now what I mean?
It seems so unambiguous at first, but once we look at the requirements for testing one's code, that inflexible transaction boundary becomes an obstacle. Sometimes we want the commit to take place, but at other times, we'd really rather it didn't happen. So what should you do instead? Call a program to do the committing for you. I have written such a program, in the my_commit package.

Show me the code!

Sorry, rather than show you all the code here (very clumsy), I offer the source code and any supporting files in this zip file. You can also download my entire "demo zip", containing all the scripts and reusable code that are part of my regular trainings. The zip for this Quseful is inside that zip as well. Here are the files in the Quseful3.zip:

- my_commit.pks - the my_commit package specification
- my_commit.pkb - the my_commit package body

- Q##MY_COMMIT.qut - a Quest Code Tester test definition export that you can import into an installation of Code Tester, in order to confirm for yourself that my_commit works as advertised.

How do I use it?

To take advantage of my_commit, I would change my procedure as follows:

   PROCEDURE adjust_ratings (gender_in IN VARCHAR2)
   IS
   BEGIN
      .... execute many queries and DML statements ....
      my_commit.perform_commit ();
   END adjust_ratings;

By default, committing is enabled, and perform_commit will do the commit. Here is the implementation of this utility:

   PROCEDURE perform_commit (context_in IN VARCHAR2 := NULL)
   IS
   BEGIN
      trace_action ('perform_commit', context_in);

      IF committing ()
      THEN
         COMMIT;
      END IF;
   END;

It contains a built-in tracing facility that you can turn on to "watch" commits. But the main thing is the conditional statement that only commits when the package setting is enabled. So when I test my code, I can disable saving and then run the program. Shown below are the steps inside SQL*Plus. Check out the test definition export in the download zip to see how this is done in Quest Code Tester setup logic.

   SQL> exec my_commit.turn_off
   SQL> exec adjust_ratings ('MALE')

And after I am done analyzing the results, I can simply roll back and test again.

Quseful #4: Get the value of (almost) any column from any table with dynamic SQL

I offer in this Quseful (Quick and Useful) a package that you can use to dynamically retrieve the value of almost any column from any table. I created this package as a "helper" utility for Quest Code Tester users. Here's the problem that I was solving with this package: we added support for automated testing of XML documents in Quest Code Tester 1.6, which will be released in a month or so (a very solid beta is available at http://unittest.inside.quest.com/beta.jspa).
So if you have a function that returns an XML document or a procedure that has an OUT XML document, you can very easily specify a test of that XML document through the Expected Results Properties window.

And this is all great, except we noticed one problem: if your XML document is stored in a column in a table, then you cannot easily point to that column and say "Please test the contents of that column in this row." We plan to make direct testing of a column in a table possible through the user interface in the future. In the meantime, though, I decided to build a backend API to allow developers to easily test their column values. It wasn't too hard, because we built lots of customizability (a word?) into Quest Code Tester from the very start. So for this particular situation, you can ask to test an expression and then choose XMLType as your type of expression.

But then you need to write a chunk of PL/SQL code to return the column's value. So I wrote the dynamic query function in the qu_helper package to make it easy to do just that.

Here is a call to the xml_column_value function to retrieve an XMLType column value:
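The original post shows this call as a screenshot. A sketch of what such a call might look like follows; the table and column names here are hypothetical, and the exact parameter list of the real qu_helper function may differ:

```sql
DECLARE
   l_doc   XMLTYPE;
BEGIN
   -- Hypothetical example: fetch the XMLType column ORDER_DOC from the
   -- ORDERS table for the single row identified by the WHERE clause.
   l_doc := qu_helper.xml_column_value (USER
                                      , 'ORDERS'
                                      , 'ORDER_DOC'
                                      , 'ORDER_ID = 100');
END;
```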

Now, that seems generally useful, so I moved that code from qu_helper into the dyn_column_value package and I offer it to you!

Show me the code!

Rather than show you all the code here (poor use of blog real estate), I offer the source code and any supporting files in this zip file. You can also download my entire "demo zip", containing all the scripts and reusable code that are part of my regular trainings. The zip for this Quseful is inside that zip as well. Here are the files in the Quseful4.zip:

- dyn_column_value.pkg - the package itself
- Q##DYN_COLUMN_VALUE.qut - a Quest Code Tester test definition export that exercises some of the programs in the package to verify its correctness

How do I use it?

Each of the retrieval functions has the same five arguments:

- owner_in - The name of the owner of the table
- table_in - The name of the table
- column_in - The name of the column
- where_in - The WHERE clause that should identify a single row
- raise_ndf_in - If you pass TRUE, then the function will raise NO_DATA_FOUND if no row is found for the WHERE clause specified. Otherwise, NULL is returned.

If other errors are raised, they are propagated out with RAISE_APPLICATION_ERROR. Here is an example:

   BEGIN
      my_salary :=
         dyn_column_pkg (USER
                       , 'EMPLOYEES'
                       , 'SALARY'
                       , 'LAST_NAME = ''FEUERSTEIN''');
   END;

Quseful #5: Does that string contain a valid number?

I offer in this Quseful (Quick and Useful) a package that you can use to determine if a string contains a valid integer, number, binary_float or binary_double (note: if you are not running Oracle 10g, you will need to comment out the binary_* versions in this package). It is based on code I wrote about back in 1997 (available here, along with the article I wrote about this topic, originally published... um... I am not sure where).

So you are now asking yourself: "Why the heck is Steven dredging up this dusty, old content?" The answer could be that Steven has run out of new things to write about, but that's not quite true. The real answer is that I visited everyone's favorite search engine the other day and searched for "PL/SQL Test". I found and followed a link to techonthenet.com, which offered a tip on how to test to see if a string was a valid number. The online help topic is found here: http://www.techonthenet.com/oracle/questions/isnumeric.php, and it suggests, in brief, that you use the TRANSLATE function to get the job done. That is exactly what I talked about not doing way back in 1997, so I thought I would offer this package to make sure anyone who needed it would have a good implementation.

Show me the code!

Download all the source code, plus the old article, and a Quest Code Tester test definition export file from this zip file. Here is the basic idea behind the "is it a valid number?" algorithm: why not let Oracle do the "heavy lifting"? After all, it's not easy to determine if a string is a valid number. There are so many forms a number can take, and over time, Oracle could add support for other ways of specifying numbers. If I write an algorithm myself, I have to keep it up to date. Yuck. So instead of doing that, I will simply call Oracle's built-in TO_NUMBER function (or the appropriate variant for other datatypes); if that program doesn't raise an exception trying to convert the string to a number, well then, it must be a valid number!
Here's the code for one of the functions:

   CREATE OR REPLACE PACKAGE BODY string_is
   IS
      FUNCTION valid_number (string_in IN VARCHAR2)
         RETURN BOOLEAN
      IS
         l_dummy       NUMBER;
         l_is_number   BOOLEAN DEFAULT FALSE;
      BEGIN
         IF string_in IS NOT NULL
         THEN
            l_dummy := TO_NUMBER (string_in);
            l_is_number := TRUE;
         END IF;

         RETURN l_is_number;
      EXCEPTION
         WHEN OTHERS
         THEN
            RETURN FALSE;
      END valid_number;

How do I use it?

It's pretty straightforward. Just pass any of the functions a string and it will return TRUE or FALSE, as in:

   BEGIN
      IF string_is.valid_integer (string_in)
      THEN
         ... use it as an integer
      ELSIF string_is.valid_number (string_in)
      THEN
         ... use it as a number

And so on. Clearly, an integer will return TRUE for both valid_integer and valid_number, so if you want to distinguish between the two, you will need to test for integer first.
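As a quick sanity check, a small anonymous block like the following (a sketch assuming the string_is package shown above is installed, and default NLS settings) shows the behavior on a few inputs:

```sql
DECLARE
   PROCEDURE check_string (string_in IN VARCHAR2)
   IS
   BEGIN
      DBMS_OUTPUT.put_line (
         '"' || string_in || '" valid number? '
         || CASE WHEN string_is.valid_number (string_in)
               THEN 'TRUE' ELSE 'FALSE'
            END);
   END check_string;
BEGIN
   check_string ('123.45');   -- TRUE
   check_string ('1e3');      -- TRUE: TO_NUMBER accepts scientific notation
   check_string ('abc');      -- FALSE
   check_string (NULL);       -- FALSE: the function treats NULL as "not a number"
END;
```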

Quseful #6: Generate Collections of Random Values

You will find in this Quseful a package that will generate/return collections of random values of strings, numbers and dates. It also contains a "self-test" random_verifier procedure that you can run to verify "at a glance" that the values being generated seem random. As a bonus, I include the pick_winners_randomly procedure, which I use in my seminars to pick the winners in raffles for my books and other goodies.

I wrote this package in August 2007 so that I could implement automatic boundary condition test generators for Quest Code Tester for Oracle. For example, if I have a function that accepts a string and a number and returns a date, then I would like to verify that if I pass in NULL for the string, then no matter what value I pass in for the number, my function always returns NULL. To do this, I need to generate a random set of values for the number.

Show me the code!

I certainly won't show you all the code in this entry. Download the full source code from this zip file. I can very quickly show you, however, that to generate random values in PL/SQL, you will take advantage of the DBMS_RANDOM package. You will find a very thorough explanation of this package and how to use it in my (and Arup Nanda's) book Oracle PL/SQL for DBAs. Do check it out and buy a copy or two! Briefly (and to simplify matters a bit), you can call one of these two programs:

- DBMS_RANDOM.VALUE - returns a random number within the specified range
- DBMS_RANDOM.STRING - returns a random string of the specified type and length

You can then also combine them in various ways.
For example, if I want to generate random strings with random lengths, I can do something like this:

   FUNCTION random_string (
      min_length_in    IN PLS_INTEGER DEFAULT 1
    , max_length_in    IN PLS_INTEGER DEFAULT 100
    , string_type_in   IN VARCHAR2 DEFAULT NULL
   )
      RETURN VARCHAR2
   IS
   BEGIN
      RETURN DBMS_RANDOM.STRING (
                string_type_in
              , DBMS_RANDOM.VALUE (min_length_in, max_length_in));
   END random_string;

But that returns just a single string. In this package, you will find programs that return a set of random values of the specified types as an associative array.

Note 1: You may want to convert this package to return nested tables so that you can call the random value generators inside a SELECT statement.

Note 2: Check out the way that I use a string-indexed collection to easily make sure that random values are unique.

How do I use it?

To make the functions as useful as possible, I provide arguments that allow you to specify the number of random values you need, whether or not you require distinct values (no repeats), and the range over which the values can vary. When generating strings, you can also specify the type of string you desire. Oracle offers these options:

- u - uppercase
- l - lowercase
- a - mixed case
- x - mix of uppercase and digits
- p - any printable character

So in the following block of code, I obtain a list of 500 random strings with a minimum of 3 characters and a maximum of 20 characters, using a mix of uppercase and digits:

   DECLARE
      l_strings   randomizer.maxvarchar2_aat;
   BEGIN
      l_strings :=
         randomizer.random_strings (count_in => 500
                                  , min_length_in => 3
                                  , max_length_in => 20
                                  , string_type_in => 'x'
                                  , distinct_values_in => TRUE
                                   );

      FOR indx IN 1 .. l_strings.COUNT
      LOOP
         DBMS_OUTPUT.put_line (l_strings (indx));
      END LOOP;
   END;

Try it out; I think you will agree these are pretty random-looking strings!
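Note 2 above mentions using a string-indexed collection to guarantee uniqueness. Here is a minimal sketch of that technique (my own illustration, not the actual randomizer code): each generated string is used as the index of a lookup collection, so EXISTS tells us instantly whether we have seen it before.

```sql
DECLARE
   TYPE strings_aat IS TABLE OF VARCHAR2 (32767) INDEX BY PLS_INTEGER;
   TYPE used_aat IS TABLE OF BOOLEAN INDEX BY VARCHAR2 (32767);

   l_strings   strings_aat;
   l_used      used_aat;
   l_next      VARCHAR2 (32767);
BEGIN
   -- Collect 10 distinct random strings of uppercase letters and digits.
   WHILE l_strings.COUNT < 10
   LOOP
      l_next := DBMS_RANDOM.STRING ('x', 8);

      -- Only keep the value if we have not generated it before.
      IF NOT l_used.EXISTS (l_next)
      THEN
         l_used (l_next) := TRUE;
         l_strings (l_strings.COUNT + 1) := l_next;
      END IF;
   END LOOP;
END;
```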

Quseful #7: Kill Those Infinite Loops!

What's the point?

I don't know about you, but I sometimes write code that (inadvertently, not on purpose) contains an infinite loop. So I run my program and Toad goes off into never-never land, with Oracle chewing up CPU cycles so intently that it is hard to connect as SYS and kill the session. I hate that, don't you? Now, there are two ways to address this problem:

1. Don't write code that contains infinite loops. Well, DUH! Of course not. I never want to do this intentionally, but of course the world (even the world of my code) does not always match my intentions.
2. Insert "killer logic" into the loop that forces termination of the loop after an excessive number of iterations. I wrote a package (loop_killer) that makes it easy to do precisely this.

Show me the code!

I certainly won't show you all the code in this entry. Download the full source code from this zip file. Here, however, is the specification of the package:

   CREATE OR REPLACE PACKAGE loop_killer
   /*
   | File name: loop_killer.pkg
   |
   | Overview: Simple API to make it easier to insert code inside a loop
   |           to check for infinite or out of control loops and kill
   |           them after N iterations.
   |
   | Raises the infinite_loop_detected exception.
   |
   | Author(s): Steven Feuerstein
   |
   | Modification History:
   | Date          Who   What
   | 23-AUG-2007   SF    Created package
   */
   IS
      e_infinite_loop_detected   EXCEPTION;
      c_infinite_loop_detected   PLS_INTEGER := -20999;
      PRAGMA EXCEPTION_INIT (e_infinite_loop_detected, -20999);

      PROCEDURE kill_after (max_iterations_in IN PLS_INTEGER);

      PROCEDURE increment_or_kill (by_in IN PLS_INTEGER DEFAULT 1);

      FUNCTION current_count
         RETURN PLS_INTEGER;
   END loop_killer;

How do I use it?

The loop killer package is very straightforward:

- loop_killer.kill_after - tells the utility the limit of iterations after which the loop should be terminated. You call this program before you start the loop. It sets the "kill after" limit and also sets the internal counter to 1.
- loop_killer.increment_or_kill - call this subprogram inside your loop. It will either increment the counter or kill the loop if the counter has reached the "kill after" value you provided earlier.

- loop_killer.current_count - returns the current count of iterations.

Things to keep in mind: the loop is terminated by raising the loop_killer.e_infinite_loop_detected exception, which has the error code -20,999. You will also see a message displayed on your screen, as shown in the example below.

Here is an example of using loop_killer to terminate a truly infinite loop; the output and the code that produced it are both shown below.

Here's the DBMS_OUTPUT text from the termination:

   Loop killer failure: Your loop exceeded 100 iterations.
   Call stack below shows location of problem:
   ----- PL/SQL Call Stack -----
     object      line  object
     handle    number  name
   26D69C20        29  package body QCTO1600_NEW.LOOP_KILLER
   26F1C63C         6  anonymous block

Here's the code, in case you want to try it yourself:

   BEGIN
      loop_killer.kill_after (100);

      LOOP
         DBMS_OUTPUT.put_line (loop_killer.current_count);
         loop_killer.increment_or_kill;
      END LOOP;
   END;
   /

Quseful #8: Execute DDL statements from a file

What's the point?

This utility will make it easy for you to read in the contents of DDL statements (like CREATE OR REPLACE PACKAGE) and execute them within Oracle.

Show me the code!

It's not a terribly long program, so I will include it right in this posting:

   CREATE OR REPLACE PROCEDURE exec_ddl_from_file (
      dir_in    IN VARCHAR2
    , file_in   IN VARCHAR2
   )
      AUTHID CURRENT_USER
   IS
      PRAGMA AUTONOMOUS_TRANSACTION;

      l_cur     PLS_INTEGER := DBMS_SQL.open_cursor;
      l_file    UTL_FILE.file_type;
      l_dummy   PLS_INTEGER;
      l_start   PLS_INTEGER;
      l_end     PLS_INTEGER;

      -- Use DBMS_SQL.varchar2s if Oracle version is earlier
      -- than Oracle Database 10g Release 10.1.
      l_lines   DBMS_SQL.varchar2a;      -- 32767 chars per line
      --l_lines DBMS_SQL.varchar2s;      -- 255 chars per line

      PROCEDURE read_file (lines_out IN OUT DBMS_SQL.varchar2a)
      IS
      BEGIN
         l_file := UTL_FILE.fopen (dir_in, file_in, 'R');

         LOOP
            UTL_FILE.get_line (l_file, lines_out (lines_out.COUNT + 1));
         END LOOP;
      EXCEPTION
         -- Reached end of file.
         WHEN NO_DATA_FOUND
         THEN
            -- Strip off trailing /. It will cause compile problems.
            IF RTRIM (lines_out (lines_out.LAST)) = '/'
            THEN
               lines_out.DELETE (lines_out.LAST);
            END IF;

            UTL_FILE.fclose (l_file);
      END read_file;
   BEGIN
      read_file (l_lines);
      l_start := 1;

      WHILE (l_lines.COUNT > 0)
      LOOP
         -- Get next set of lines up to / all by itself.
         l_end := l_start;

         WHILE (l_lines (l_end) <> '/')
         LOOP
            l_end := l_end + 1;
         END LOOP;

         DBMS_OUTPUT.put_line (
            'parse from lines ' || l_start || ' to ' || l_end);

         -- Do not include the / symbol.
         DBMS_SQL.parse (l_cur
                       , l_lines
                       , l_start
                       , l_end - 1
                       , TRUE
                       , DBMS_SQL.native
                        );
         l_dummy := DBMS_SQL.EXECUTE (l_cur);

         -- You can even determine the type of statement executed
         -- by calling the DBMS_SQL.last_sql_function_code function
         -- immediately after you execute the statement. Check the
         -- Oracle Call Interface Programmer's Guide for an explanation
         -- of the codes returned.
         DBMS_OUTPUT.put_line (
            'Type of statement executed: '
            || DBMS_SQL.last_sql_function_code ());

         -- Remove the lines just processed (including the /),
         -- then advance past them.
         l_lines.DELETE (l_start, l_end);
         l_start := l_end + 1;
      END LOOP;

      DBMS_SQL.close_cursor (l_cur);
   EXCEPTION
      WHEN OTHERS
      THEN
         DBMS_OUTPUT.put_line ('Compile from ' || file_in || ' failed!');
         DBMS_OUTPUT.put_line (DBMS_UTILITY.format_error_stack);
         DBMS_OUTPUT.put_line (DBMS_UTILITY.format_error_backtrace);

         IF UTL_FILE.is_open (l_file)
         THEN
            UTL_FILE.fclose (l_file);
         END IF;

         IF DBMS_SQL.is_open (l_cur)
         THEN
            DBMS_SQL.close_cursor (l_cur);
         END IF;
   END exec_ddl_from_file;
   /

   GRANT EXECUTE ON exec_ddl_from_file TO PUBLIC
   /

   CREATE PUBLIC SYNONYM exec_ddl_from_file FOR exec_ddl_from_file
   /

How do I use it?

Easy enough! Just pass it the location of the file and its name. Remember, though, that this program uses UTL_FILE, so the location of the file must either be specified as a valid directory in the database's UTL_FILE_DIR parameter, or you must specify a database directory name on which you have read authority.

Some things to keep in mind:

The program is compiled as AUTHID CURRENT_USER, which means that the DDL statements will be executed within the schema that is currently connected, and not the schema that owns this program.

It is an autonomous transaction, so the implicit commit caused by the DDL statement execution will not save any other outstanding changes in your session.

You can have multiple statements in your file and they will all be executed, as long as there is a / character to terminate each statement.

I read the contents of the file into a collection defined on the DBMS_SQL.VARCHAR2A type, which was introduced in Oracle Database 10g Release 1. If you are on an earlier version, use the DBMS_SQL.VARCHAR2S type instead.

This program will only allow you to execute DDL statements, or DML statements that do not contain any placeholders for bind variables.
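The heart of exec_ddl_from_file is its splitting loop: read the script into an array of lines, then parse and execute each run of lines up to (but not including) a line holding just a /. That splitting logic is language-neutral, so here is a minimal Python sketch of it. The function name is mine, not part of the utility, and the execute step is just a stub standing in for the DBMS_SQL calls:

```python
def split_statements(lines):
    """Split script lines into statements, treating a line that contains
    only '/' as the terminator; the '/' itself is not included."""
    statements, current = [], []
    for line in lines:
        if line.strip() == "/":
            if current:
                statements.append(current)
            current = []
        else:
            current.append(line)
    if current:                      # keep a trailing statement with no '/'
        statements.append(current)
    return statements

script = [
    "CREATE OR REPLACE PROCEDURE p1 IS BEGIN NULL; END;",
    "/",
    "CREATE OR REPLACE PROCEDURE p2 IS BEGIN NULL; END;",
    "/",
]
for stmt in split_statements(script):
    # In the utility this step is DBMS_SQL.parse followed by DBMS_SQL.execute.
    print("executing", len(stmt), "line(s)")
```

Note how a trailing statement with no closing / is still picked up, just as read_file strips the final / so the last statement parses cleanly.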

Quseful #9: Refactoring

All files referenced in this document are available from: www.oracleplsqlprogramming.com/downloads/demo.zip

Introduction

Very, very few of us write perfect programs the first time, or the second time, or... you get the idea. Our code is never perfect and can always be improved. Martin Fowler developed a technique he calls "refactoring," and it has become quite popular in the world of Java. Here is Mr. Fowler's description of refactoring:

"Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.

"It is a disciplined way to clean up code that minimizes the chances of introducing bugs. In essence when you refactor you are improving the design of the code after it has been written."

To demonstrate this approach in my trainings, I put together the following document and associated code. I start with a very poorly designed program to compare two files for equality. I then build a regression test for that program (a critical step in effective refactoring) using Quest Code Tester. With that test in place, I go through a multi-step transformation of this program until it actually works and it has a much better structure.

The explanations for each stage in the transformation are, I admit, a bit sketchy. Perhaps you will have the opportunity to attend one of my trainings in which I will do a full presentation of this process. I encourage you, however, to:

Start with the initial implementation and critique it: what's wrong with it? What could be improved?

Go through my various iterations and make sure you understand what I did, and perhaps even do it yourself. Or maybe follow your own progression from ugly code to perfect code (or at least, much better code).

Give me feedback on my steps and my changes. Do you disagree with any of them? Do you have a suggestion for what I could do better?
Before the First, Quick and Dirty Implementation

I just started a new job as a PL/SQL developer. For my first task, I was told to "fix" the files_are_equal function. The person I replaced had written it just before leaving. It is supposed to check whether two files contain the same contents. There was very little documentation about the program, but I was told that I should assume the following:

The program compares the contents of text files only, not binary data.

The directories and file names passed to the program cannot be null.

The files may not exist at all, and the files can be empty.

If the second directory is NULL, then assume both files are located in the same directory.

If the corresponding lines in the two files are both NULL, then those lines will be considered the same (that is, NULL = NULL).

The code I inherited

I took a look at the code and quickly realized that "quick and dirty" was probably an overstatement for all the thought that had gone into writing this program. Here is a rough summary (perhaps "intention" would be a better word to use) of the logic I found in this function:

Pass in the "check this" and "against this" file information (file name and directory).

Open both files, in read-only mode.

Read the next line from each file. If they match, then go on to the next line. If they do not match, then stop.

Close both files and return the result.

If any error occurs, return FALSE.

Quick and Dirty: the starting point for refactoring (38 lines)

CREATE OR REPLACE FUNCTION files_are_equal (
   file1_name_in   IN VARCHAR2,
   dir1_name_in    IN VARCHAR2,
   file2_name_in   IN VARCHAR2,
   dir2_name_in    IN VARCHAR2 := NULL
)
   RETURN BOOLEAN
IS
   v_file1id         UTL_FILE.file_type;
   v_file1line       VARCHAR2 (32767);
   --
   l_file2id         UTL_FILE.file_type;
   l_file2line       VARCHAR2 (32767);
   --
   identical_files   BOOLEAN DEFAULT TRUE;
BEGIN
   v_file1id := UTL_FILE.fopen (dir1_name_in, file1_name_in, 'R', 32767);
   l_file2id :=
      UTL_FILE.fopen (NVL (dir2_name_in, dir1_name_in)
                    , file2_name_in
                    , 'R'
                    , 32767
                     );

   LOOP
      UTL_FILE.get_line (v_file1id, v_file1line);
      UTL_FILE.get_line (l_file2id, l_file2line);
      identical_files := v_file1line = l_file2line;
   END LOOP;

   UTL_FILE.fclose (v_file1id);
   UTL_FILE.fclose (l_file2id);
   RETURN identical_files;
EXCEPTION
   WHEN OTHERS
   THEN
      RETURN FALSE;
END files_are_equal;

Critique of This Very Quick and Dirty Program

Here are some of the things that bothered me about this program:

There is no program header describing the author, copyright, modification history, general description, etc. It is an "anonymous" program.

Inconsistent naming conventions for local variables. Some start with "l_" (which is, for me at least, the preferred convention) and others do not.

Lots of hard-coded values: the length of the line strings, the specification of the maximum length in the calls to FOPEN, the mode under which the file is opened.

Take a look at that loop: it appears to be an infinite loop. It doesn't contain any sort of EXIT statement or boundary condition to terminate the loop. Sure, the loop will stop when UTL_FILE.GET_LINE raises NO_DATA_FOUND, but really that is a scary-looking piece of code.

The comparison logic is simplistic and buggy. At first glance, it makes some kind of sense, but delve deeper and this function is deeply flawed. In fact, this function will never return TRUE when the two files are the same. Can you see why?

There are no comments to explain the basic logic and, in particular, the thinking behind the approach taken to making the comparison.

Does it really make sense to simply return FALSE if any error has occurred? Certainly you could argue that two files cannot possibly be equal if an error occurred in the comparison, but usually you would not want to hide the fact that an error occurred. And one would at least distinguish between the NO_DATA_FOUND exception (raised by the UTL_FILE.GET_LINE procedure reading past the last line) and other, "real" errors. And if one of those more serious errors does occur, I do not close the files. This might leave one or both files open, which could cause other problems in my application.

WHAT OTHER ISSUES CAN YOU FIND?

The Refactoring Sequence

Major changes, and the files in which each stage is implemented:

The original code for the files_are_equal implementation.
   File: eqfiles_before_ref.sf

Establish a baseline regression test, so I can analyze the impact of my changes. I offer the following: an export of the Quest Code Tester test definition for files_are_equal, a procedure to run the Code Tester test in "batch" mode, and a command line script to test each iteration of refactoring. Note: Quest Code Tester 1.6.1 or above is required.
   Files: q##files_are_equal.qut, q##files_are_equal.sp, eqfiles.tst

Create helper subprograms: initialize, cleanup, and hide the logic for comparison (it is more complicated than it appears at first glance). Replace the direct call to GET_LINE with a program that "hides" the NO_DATA_FOUND. Hmmm, that means we can no longer rely on NO_DATA_FOUND to stop the loop execution! Perhaps we should build in an emergency bail-out in case of infinite loops?
   Files: eqfiles_helper_programs.sf, loop_killer.pkg

Get rid of hard-coded values. Best to create a container for UTL_FILE constants.
   Files: eqfiles_no_literals.sf, utl_file_constants.pkg

Address fundamental logic problems in the "check for equality" algorithm. Also, recognizing that one file could be shorter than the other, we should probably deal separately with the questions "Are the files the same?" and "Should I stop the loop?" So check_for_equality gets more inputs and gets much more complicated. We should not leave this stage until we get 100% success (green light) from our regression test.
   File: eqfiles_real_comparison.sf

Bullet-proof the code: now that the code seems to be working, let's think about validating assumptions, generally taking care of unusual scenarios, and improving error handling. Add assertions for the inputs. Don't return FALSE for just any exception; in fact, at this point NO_DATA_FOUND should not be raised by the program at all, so we may want to separate out that case. Now we see the need for another helper package, this one for PL/SQL limits.
   Files: eqfiles_bullet_proof.sf, eqfiles_with_qem.sf, plsql_limits.pks
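Before walking through the PL/SQL iterations, the target design is worth seeing in miniature: a line reader that converts end-of-file into a Boolean flag, plus a comparison step that answers two questions at once - are the lines still identical, and should we keep reading? This Python sketch mirrors the helper names used below, but it is only an illustration of the logic, not the shipped code:

```python
def check_for_equality(line1, line2, eof1, eof2):
    """Return (identical, read_next) for one pair of lines."""
    if eof1 and eof2:
        return True, False       # reached the end of both files together
    if eof1 or eof2:
        return False, False      # one file is shorter than the other
    # Two NULL (None) lines count as equal, per the stated assumption.
    identical = (line1 is None and line2 is None) or line1 == line2
    return identical, identical  # keep reading only while still identical

_EOF = object()

def next_line(lines_iter):
    """Mimic get_next_line_from_file: trap 'end of file', return a flag."""
    line = next(lines_iter, _EOF)
    return (None, True) if line is _EOF else (line, False)

def files_are_equal(lines1, lines2):
    it1, it2 = iter(lines1), iter(lines2)
    identical, keep_checking = False, True
    while keep_checking:
        line1, eof1 = next_line(it1)
        line2, eof2 = next_line(it2)
        identical, keep_checking = check_for_equality(line1, line2, eof1, eof2)
    return identical

print(files_are_equal(["same", "lines"], ["same", "lines"]))   # → True
print(files_are_equal(["short"], ["short", "er"]))             # → False
```

Notice that the loop now has an explicit termination condition, and the "both at end-of-file" case is the only path to a TRUE result - exactly the two flaws in the quick and dirty version.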

Final cleanup: Consolidate multiple variables into single records, make the parameter passing more concise and consistent, and add a program header.
   File: eqfiles_moving_parts.sf

Reorg code into helper programs - eqfiles_helper_programs.sf

CREATE OR REPLACE FUNCTION files_are_equal (
   file1_name_in   IN VARCHAR2,
   dir1_name_in    IN VARCHAR2,
   file2_name_in   IN VARCHAR2,
   dir2_name_in    IN VARCHAR2 := NULL
)
   RETURN BOOLEAN
IS
   l_file1id     UTL_FILE.file_type;
   l_file1line   VARCHAR2 (32767);
   l_file1eof    BOOLEAN;
   --
   l_file2id     UTL_FILE.file_type;
   l_file2line   VARCHAR2 (32767);
   l_file2eof    BOOLEAN;
   --
   l_identical   BOOLEAN DEFAULT TRUE;

   PROCEDURE initialize
   IS
   BEGIN
      l_file1id := UTL_FILE.fopen (dir1_name_in, file1_name_in, 'R', 32767);
      l_file2id :=
         UTL_FILE.fopen (NVL (dir2_name_in, dir1_name_in)
                       , file2_name_in
                       , 'R'
                       , 32767
                        );
   END initialize;

   PROCEDURE cleanup
   IS
   BEGIN
      UTL_FILE.fclose (l_file1id);
      UTL_FILE.fclose (l_file2id);
   END cleanup;

   /* Avoid a direct call to GET_LINE because it leads to poorly
      structured code (application logic in the exception section).
      Instead, trap the NO_DATA_FOUND exception and return a
      Boolean flag. */
   PROCEDURE get_next_line_from_file (
      file_in   IN     UTL_FILE.file_type,
      line_out     OUT VARCHAR2,
      eof_out      OUT BOOLEAN
   )
   IS
   BEGIN
      UTL_FILE.get_line (file_in, line_out);
      eof_out := FALSE;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         line_out := NULL;
         eof_out := TRUE;
   END get_next_line_from_file;


   PROCEDURE check_for_equality (
      line1_in        IN     VARCHAR2,
      line2_in        IN     VARCHAR2,
      identical_out      OUT BOOLEAN
   )
   IS
   BEGIN
      identical_out := line1_in = line2_in;
   END check_for_equality;
BEGIN
   initialize;

   WHILE (l_identical AND NOT l_file1eof AND NOT l_file2eof)
   LOOP
      get_next_line_from_file (l_file1id, l_file1line, l_file1eof);
      get_next_line_from_file (l_file2id, l_file2line, l_file2eof);
      check_for_equality (l_file1line, l_file2line, l_identical);
   END LOOP;

   cleanup;
   RETURN l_identical;
EXCEPTION
   WHEN OTHERS
   THEN
      cleanup;
      RETURN FALSE;
END files_are_equal;

Remove hard-codings - eqfiles_no_literals.sf

CREATE OR REPLACE FUNCTION files_are_equal (
   file1_name_in   IN VARCHAR2
 , dir1_name_in    IN VARCHAR2
 , file2_name_in   IN VARCHAR2
 , dir2_name_in    IN VARCHAR2 := NULL
)
   RETURN BOOLEAN
IS
   l_file1id     UTL_FILE.file_type;
   l_file1line   utl_file_constants.max_linesize_t;
   l_file1eof    BOOLEAN;
   --
   l_file2id     UTL_FILE.file_type;
   l_file2line   utl_file_constants.max_linesize_t;
   l_file2eof    BOOLEAN;
   --
   l_identical   BOOLEAN DEFAULT TRUE;

   PROCEDURE initialize
   IS
   BEGIN
      l_file1id :=
         UTL_FILE.fopen (dir1_name_in
                       , file1_name_in
                       , utl_file_constants.read_only ()
                       , utl_file_constants.max_linesize ()
                        );
      l_file2id :=
         UTL_FILE.fopen (NVL (dir2_name_in, dir1_name_in)
                       , file2_name_in
                       , utl_file_constants.read_only ()
                       , utl_file_constants.max_linesize ()
                        );
   END initialize;

CREATE OR REPLACE PACKAGE utl_file_constants
IS
   SUBTYPE max_linesize_t IS VARCHAR2 (32767);
   SUBTYPE def_linesize_t IS VARCHAR2 (1024);

   FUNCTION read_only RETURN VARCHAR2;
   FUNCTION write_only RETURN VARCHAR2;
   FUNCTION append RETURN VARCHAR2;

   FUNCTION min_linesize RETURN PLS_INTEGER;
   FUNCTION max_linesize RETURN PLS_INTEGER;
   FUNCTION def_linesize RETURN PLS_INTEGER;
END utl_file_constants;

Fix problems in comparison logic - eqfiles_moving_parts.sf

/* Isolate the comparison logic into a single procedure.
   Return flags indicating whether or not to continue reading
   from the files and whether the two are still identical. */
PROCEDURE check_for_equality (
   file1_line_in   IN     VARCHAR2,
   file2_line_in   IN     VARCHAR2,
   l_file1eof_in   IN     BOOLEAN,
   l_file2eof_in   IN     BOOLEAN,
   identical_out      OUT BOOLEAN,
   read_next_out      OUT BOOLEAN
)
IS
BEGIN
   IF l_file1eof_in AND l_file2eof_in
   THEN
      /* Made it to the end of both files simultaneously.
         That's good news! */
      identical_out := TRUE;
      read_next_out := FALSE;
   ELSIF l_file1eof_in OR l_file2eof_in
   THEN
      /* Reached end of one before the other. Not identical! */
      identical_out := FALSE;
      read_next_out := FALSE;
   ELSE
      /* Only continue if the two lines are identical. And if they
         are both null/empty, consider them to be equal. */
      identical_out :=
            file1_line_in = file2_line_in
         OR (file1_line_in IS NULL AND file2_line_in IS NULL);
      read_next_out := identical_out;
   END IF;
END check_for_equality;
BEGIN
   initialize;

   WHILE (l_keep_checking)
   LOOP
      get_next_line (l_file1id, l_file1line, l_file1eof);
      get_next_line (l_file2id, l_file2line, l_file2eof);
      check_for_equality (l_file1line

                        , l_file2line
                        , l_file1eof
                        , l_file2eof
                        , l_identical
                        , l_keep_checking
                         );
   END LOOP;

   cleanup;
   RETURN l_identical;

Bullet-proof the code (assertions, exception mgt) - eqfiles_bullet_proof.sf

/* Very simple, generic assertion program to increase the likelihood
   that I will actually use it to make sure assumptions are being
   followed. */
PROCEDURE assert (condition_in IN BOOLEAN, msg_in IN VARCHAR2)
IS
BEGIN
   IF NOT condition_in OR condition_in IS NULL
   THEN
      raise_application_error (-20000, msg_in);
   END IF;
END assert;

/* Consolidate all initialization logic:
   - Validate all assumptions regarding inputs.
   - Open the files. */
PROCEDURE initialize
IS
BEGIN
   /* Make sure inputs are valid. */
   assert (dir1_name_in IS NOT NULL, 'Directory cannot be NULL.');
   assert (file1_name_in IS NOT NULL, 'File name cannot be NULL.');
   assert (file2_name_in IS NOT NULL, 'File name cannot be NULL.');

   /* Open both files, read-only. */
   l_file1id :=
      UTL_FILE.fopen (dir1_name_in,
                      file1_name_in,
                      utl_file_constants.read_only (),
                      utl_file_constants.max_linesize ()
                     );
   l_file2id :=
      UTL_FILE.fopen (NVL (dir2_name_in, dir1_name_in),
                      file2_name_in,
                      utl_file_constants.read_only (),
                      utl_file_constants.max_linesize ()
                     );
END initialize;

PROCEDURE cleanup
IS
BEGIN
   /* Close any files that are still open. */
   IF UTL_FILE.is_open (l_file1id)
   THEN
      UTL_FILE.fclose (l_file1id);
   END IF;

   IF UTL_FILE.is_open (l_file2id)
   THEN
      UTL_FILE.fclose (l_file2id);
   END IF;

Bullet-proof II: use of q$error_manager - eqfiles_with_qem.sf

PROCEDURE cleanup (
   sqlcode_in IN PLS_INTEGER DEFAULT plsql_limits.c_no_error)
IS
BEGIN
   /* Close any files that are still open. */
   IF UTL_FILE.is_open (l_file1id)
   THEN
      UTL_FILE.fclose (l_file1id);
   END IF;

   IF UTL_FILE.is_open (l_file2id)
   THEN
      UTL_FILE.fclose (l_file2id);
   END IF;

   /* If I have an error, then log the information and raise it
      back out of the function. I am using the Quest Error Manager
      freeware utility, available from:
      www.oracleplsqlprogramming.com/downloads/qem.zip */
   IF sqlcode_in <> plsql_limits.c_no_error
   THEN
      q$error_manager.raise_error (
         error_name_in   => 'UNANTICIPATED-ERROR',
         text_in         => 'Unexpected error when attempting to compare two files for equality.',
         name1_in        => 'FILE1 DIR',
         value1_in       => dir1_name_in,
         name2_in        => 'FILE2 DIR',
         value2_in       => dir2_name_in,
         name3_in        => 'FILE1 NAME',
         value3_in       => file1_name_in,
         name4_in        => 'FILE2 FILE',
         value4_in       => file2_name_in
      );
   END IF;
END cleanup;

Reduce moving parts, add program header - eqfiles_moving_parts.sf
("Final" version - some comments removed to use less paper!)

CREATE OR REPLACE FUNCTION files_are_equal
/*
| File name: eqfiles_after_ref.sql
|
| Overview: Compare two files to see if they have the same contents.
|    Note: If no "file2 this" directory, then we use the
|    same directory as "file1 this".
|
| Author(s): Steven Feuerstein
|
| Modification History:
|   Date         Who   What
|   19-AUG-2007  SF    Refactored program for PL/SQL Mosaic course
|   23-SEP-2005  SF    Created program (see eqfiles_before_ref.sf)
*/
(
   file1_name_in   IN VARCHAR2,
   dir1_name_in    IN VARCHAR2,
   file2_name_in   IN VARCHAR2,
   dir2_name_in    IN VARCHAR2 := NULL
)

   RETURN BOOLEAN
IS
   TYPE file_info_rt IS RECORD (
      file_id     UTL_FILE.file_type,
      next_line   utl_file_constants.max_linesize_t,
      eof         BOOLEAN
   );

   l_file1   file_info_rt;
   l_file2   file_info_rt;
   --
   l_keep_checking   BOOLEAN DEFAULT TRUE;
   l_identical       BOOLEAN DEFAULT FALSE;

   PROCEDURE assert (condition_in IN BOOLEAN, msg_in IN VARCHAR2)
   IS
   BEGIN
      IF NOT condition_in OR condition_in IS NULL
      THEN
         raise_application_error (-20000, msg_in);
      END IF;
   END assert;

   PROCEDURE initialize (
      file1_out OUT file_info_rt,
      file2_out OUT file_info_rt)
   IS
   BEGIN
      /* Make sure inputs are valid. */
      assert (dir1_name_in IS NOT NULL, 'Directory cannot be NULL.');
      assert (file1_name_in IS NOT NULL, 'File name cannot be NULL.');
      assert (file2_name_in IS NOT NULL, 'File name cannot be NULL.');

      /* Open both files, read-only. */
      file1_out.file_id :=
         UTL_FILE.fopen (location       => dir1_name_in,
                         filename       => file1_name_in,
                         open_mode      => utl_file_constants.read_only (),
                         max_linesize   => utl_file_constants.max_linesize ()
                        );
      file2_out.file_id :=
         UTL_FILE.fopen (location       => NVL (dir2_name_in, dir1_name_in),
                         filename       => file2_name_in,
                         open_mode      => utl_file_constants.read_only (),
                         max_linesize   => utl_file_constants.max_linesize ()
                        );
   END initialize;

   PROCEDURE get_next_line_from_file (file_inout IN OUT file_info_rt)
   IS
   BEGIN
      UTL_FILE.get_line (file_inout.file_id, file_inout.next_line);
      file_inout.eof := FALSE;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         file_inout.eof := TRUE;
   END get_next_line_from_file;

   PROCEDURE check_for_equality (
      file1_in        IN     file_info_rt,
      file2_in        IN     file_info_rt,
      identical_out      OUT BOOLEAN,
      read_next_out      OUT BOOLEAN
   )
   IS
   BEGIN

      IF file1_in.eof AND file2_in.eof
      THEN
         /* Made it to the end of both files simultaneously.
            That's good news! */
         identical_out := TRUE;
         read_next_out := FALSE;
      ELSIF file1_in.eof OR file2_in.eof
      THEN
         /* Reached end of one before the other. Not identical! */
         identical_out := FALSE;
         read_next_out := FALSE;
      ELSE
         /* Only continue if the two lines are identical. And if they
            are both null/empty, consider them to be equal. */
         identical_out :=
               file1_in.next_line = file2_in.next_line
            OR (file1_in.next_line IS NULL AND file2_in.next_line IS NULL);
         read_next_out := identical_out;
      END IF;
   END check_for_equality;

   PROCEDURE cleanup (
      sqlcode_in IN PLS_INTEGER DEFAULT plsql_limits.c_no_error)
   IS
   BEGIN
      /* Close any files that are still open. */
      IF UTL_FILE.is_open (l_file1.file_id)
      THEN
         UTL_FILE.fclose (l_file1.file_id);
      END IF;

      IF UTL_FILE.is_open (l_file2.file_id)
      THEN
         UTL_FILE.fclose (l_file2.file_id);
      END IF;

      /* If I have an error, then log the information and raise it
         back out of the function. I am using the Quest Error Manager
         freeware utility, available from:
         www.oracleplsqlprogramming.com/downloads/qem.zip */
      IF sqlcode_in <> plsql_limits.c_no_error
      THEN
         q$error_manager.raise_error (
            error_name_in   => 'UNANTICIPATED-ERROR',
            text_in         => 'Unexpected error when attempting to compare two files for equality.',
            name1_in        => 'FILE1 DIR',
            value1_in       => dir1_name_in,
            name2_in        => 'FILE2 DIR',
            value2_in       => dir2_name_in,
            name3_in        => 'FILE1 NAME',
            value3_in       => file1_name_in,
            name4_in        => 'FILE2 FILE',
            value4_in       => file2_name_in
         );
      END IF;
   END cleanup;
BEGIN
   initialize (l_file1, l_file2);

   WHILE (l_keep_checking)
   LOOP
      get_next_line_from_file (l_file1);
      get_next_line_from_file (l_file2);

      check_for_equality (l_file1, l_file2, l_identical, l_keep_checking);
   END LOOP;

   cleanup;
   RETURN l_identical;
EXCEPTION
   WHEN OTHERS
   THEN
      cleanup (SQLCODE);
      RAISE;
END files_are_equal;
/

Quseful #10: Oracle11g function result cache

The function result cache in Oracle11g is far and away the most important new feature for PL/SQL developers.

Suppose you have a table that is queried frequently (let's say thousands of times a minute) but is only updated once or twice an hour. In between those changes, that table is static. Now, most PL/SQL developers have developed a very bad habit: whenever they need to retrieve data, they write the necessary SELECT statement directly in their high-level application code. As a result, their application must absorb the overhead of going through the SQL layer in the SGA, over and over again, to get that unchanging data.

If, on the other hand, you put that SELECT statement inside its own function and then define that function as a result cache, magical things happen. Namely, whenever anyone in that database instance calls the function, Oracle first checks to see if anyone has already called the function with the same input values. If so, then the cached return value is returned without running the function body. If the inputs are not found, then the function is executed, the inputs and return data are stored in the cache, and then the data is sent back to the caller. The data is never queried more than once from the SQL layer - as long as it hasn't changed.

As soon as anyone connected to that instance commits changes to a table on which the cache is dependent, Oracle invalidates the cache, so that the data will have to be re-queried (but just once). You are, as would be expected inside an Oracle database, guaranteed to always see clean, correct data.

Why would you do this? Because the performance improvements are dramatic.
In the 11g_emplu.pkg script (available in the demo.zip), I compare the performance of a normal database query via a function to a function result cache built around the same query, and I see these results:

   Execute query each time     Elapsed: 5.65 seconds.
   Oracle 11g result cache     Elapsed: .30 seconds.

Isn't that just amazing and incredible and wonderful?

Here's the original version of the function (over 5 seconds):

PACKAGE BODY emplu
IS
   FUNCTION onerow (employee_id_in IN employees.employee_id%TYPE)
      RETURN employees%ROWTYPE
   IS
      onerow_rec   employees%ROWTYPE;
   BEGIN
      SELECT *
        INTO onerow_rec
        FROM employees
       WHERE employee_id = employee_id_in;

      RETURN onerow_rec;
   END onerow;
END emplu;

and here's the result cache version:

PACKAGE BODY emplu
IS
   FUNCTION onerow (employee_id_in IN employees.employee_id%TYPE)
      RETURN employees%ROWTYPE
      RESULT_CACHE RELIES_ON (employees)
   IS
      onerow_rec   employees%ROWTYPE;
   BEGIN
      SELECT *

        INTO onerow_rec
        FROM employees
       WHERE employee_id = employee_id_in;

      RETURN onerow_rec;
   END onerow;
END emplu;

Can you see the difference? Not much of a change, right? I just added that single RESULT_CACHE line. And notice that I would not have to change any of the code that was already calling this function.

Here's the bottom line regarding the function result cache: get ready now to take advantage of this feature. Stop writing SELECT statements directly into your application code. Instead, hide your queries in functions so that you can easily convert to result caches when you upgrade to Oracle11g.
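The cache behavior described above - return the stored result on a hit, execute and store on a miss, throw the cache away when the underlying table changes - is essentially memoization with invalidation. This Python sketch is only a toy model of the idea (the class, names, and the manual invalidate call are mine; Oracle tracks the table dependency and invalidates automatically):

```python
class ResultCache:
    """Toy model of a function result cache, keyed on the input values."""

    def __init__(self, fetch):
        self.fetch = fetch      # the expensive lookup (the SQL query)
        self.cache = {}
        self.misses = 0         # counts how often the body really runs

    def __call__(self, key):
        if key not in self.cache:
            # Cache miss: run the function body and store the result.
            self.misses += 1
            self.cache[key] = self.fetch(key)
        # Cache hit: the stored value is returned; no query at all.
        return self.cache[key]

    def invalidate(self):
        # Oracle does this automatically whenever a session commits
        # changes to a table the cache RELIES_ON; here it is manual.
        self.cache.clear()

employees = {100: "King", 101: "Kochhar"}
onerow = ResultCache(lambda emp_id: employees[emp_id])

onerow(100); onerow(100); onerow(101)
print(onerow.misses)    # → 2: each distinct input is fetched only once
```

The second call with 100 never touches the "table" at all, which is exactly why the cached version of emplu.onerow runs in a fraction of the time.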
