PL/SQL User's Guide and Reference, Release 2 (9.2), Part Number A96624-01
Every day, in every way, I am getting better and better. --Émile Coué
Over time, even the performance of well-designed applications can degrade. So, periodic tuning is an important part of application maintenance. This chapter shows you how to make small adjustments that improve performance. By tuning your applications, you can make sure they continue to deliver the required response time and throughput.
When a PL/SQL-based application performs poorly, it is often due to badly written SQL statements, poor programming practices, inattention to PL/SQL basics, or misuse of shared memory.
PL/SQL programs look relatively simple because their complexity is hidden in SQL statements, which do most of the work. That is why badly written SQL statements are the main reason for slow execution. If a program contains many badly written SQL statements, no number of well-written PL/SQL statements will help.
If badly written SQL statements are slowing down your program, analyze their execution plans and performance using the methods listed below. Then, rewrite the SQL statements. For example, hints to the query optimizer can eliminate problems such as unnecessary full-table scans.
For more information about these methods, see Oracle9i Database Performance Planning.
Often, poor programming practices are a side effect of schedule crunches. In such circumstances, even experienced programmers might write code that hampers performance.
No matter how suitable a programming language is for a given task, badly written subprograms (for example, a slow sort or search function) can ruin performance. Suppose the subprogram called most often by an application is a lookup function with hundreds of possible targets. If that function could be written as a hash or a binary search but, instead, is written as a linear search, overall performance suffers.
Other poor practices include declaring variables that are never used, passing unneeded parameters to functions and procedures, placing initializations or computations inside a loop needlessly, and so on.
PL/SQL provides many highly optimized functions such as REPLACE, TRANSLATE, SUBSTR, INSTR, RPAD, and LTRIM. Do not hand code your own versions. Built-in functions are more efficient. Even when a built-in function has more power than you need, use it rather than hand-coding a subset of its functionality.
When evaluating a logical expression, PL/SQL uses short-circuit evaluation. That is, PL/SQL stops evaluating the expression as soon as the result can be determined. For example, in the following OR expression, when the value of sal is less than 1500, the left operand yields TRUE, so PL/SQL need not evaluate the right operand (because OR returns TRUE if either of its operands is true):
IF (sal < 1500) OR (comm IS NULL) THEN ... END IF;
Now, consider the following AND expression:
IF credit_ok(cust_id) AND (loan < 5000) THEN ... END IF;
The Boolean function credit_ok is always called. However, if you switch the operands of AND as follows
IF (loan < 5000) AND credit_ok(cust_id) THEN ... END IF;
the function is called only when the expression loan < 5000 is true (because AND returns TRUE only if both its operands are true).

The same idea applies to EXIT-WHEN statements.
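For instance, in an EXIT-WHEN condition you can place the inexpensive comparison before the costly function call. A minimal sketch, in which the local function done_processing is hypothetical, standing in for an expensive check:

```sql
DECLARE
   counter  PLS_INTEGER := 0;
   max_rows CONSTANT PLS_INTEGER := 100;

   -- Hypothetical stand-in for an expensive check.
   FUNCTION done_processing (n PLS_INTEGER) RETURN BOOLEAN IS
   BEGIN
      RETURN FALSE;
   END;
BEGIN
   LOOP
      counter := counter + 1;
      -- The cheap comparison comes first: once it yields TRUE,
      -- short-circuit evaluation skips the call to done_processing.
      EXIT WHEN (counter > max_rows) OR done_processing(counter);
   END LOOP;
END;
```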
At run time, PL/SQL converts between structurally different datatypes implicitly. For instance, assigning a PLS_INTEGER variable to a NUMBER variable results in a conversion because their internal representations are different.
Avoiding implicit conversions can improve performance. Look at the example below. The integer literal 15 is represented internally as a signed 4-byte quantity, so PL/SQL must convert it to an Oracle number before the addition. However, the floating-point literal 15.0 is represented as a 22-byte Oracle number, so no conversion is necessary:
DECLARE
   n NUMBER;
   c CHAR(5);
BEGIN
   n := n + 15;    -- converted
   n := n + 15.0;  -- not converted
   ...
END;
Here is another example:
DECLARE
   c CHAR(5);
BEGIN
   c := 25;    -- converted
   c := '25';  -- not converted
   ...
END;
The datatype NUMBER and its subtypes are 22-byte, database-format numbers, designed for portability and arbitrary scale/precision, not performance. When you need to declare an integer variable, use the datatype PLS_INTEGER, which is the most efficient numeric type. That is because PLS_INTEGER values require less storage than INTEGER or NUMBER values. Also, PLS_INTEGER operations use machine arithmetic, so they are faster than BINARY_INTEGER, INTEGER, or NUMBER operations, which use library arithmetic.
Furthermore, INTEGER, NATURAL, NATURALN, POSITIVE, POSITIVEN, and SIGNTYPE are constrained subtypes. So, their variables require precision checking at run time, which can affect performance.
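To see the difference yourself, you can compare loop times using DBMS_UTILITY.GET_TIME, which returns elapsed time in hundredths of a second. A rough sketch, not a rigorous benchmark:

```sql
DECLARE
   t1 NUMBER;
   t2 NUMBER;
   n1 PLS_INTEGER := 0;  -- machine arithmetic
   n2 NUMBER := 0;       -- library arithmetic
BEGIN
   t1 := DBMS_UTILITY.GET_TIME;
   FOR i IN 1..1000000 LOOP
      n1 := n1 + 1;
   END LOOP;
   t2 := DBMS_UTILITY.GET_TIME;
   dbms_output.put_line('PLS_INTEGER: ' || (t2 - t1) || ' hsecs');

   t1 := DBMS_UTILITY.GET_TIME;
   FOR i IN 1..1000000 LOOP
      n2 := n2 + 1;
   END LOOP;
   t2 := DBMS_UTILITY.GET_TIME;
   dbms_output.put_line('NUMBER:      ' || (t2 - t1) || ' hsecs');
END;
```

The absolute timings depend on your hardware and load; what matters is the relative gap between the two loops.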
In PL/SQL, using the NOT NULL constraint incurs a performance cost. Consider the following example:
PROCEDURE calc_m IS
   m NUMBER NOT NULL := 0;
   a NUMBER;
   b NUMBER;
BEGIN
   ...
   m := a + b;
   ...
END;
Because m is constrained by NOT NULL, the value of the expression a + b is assigned to a temporary variable, which is then tested for nullity. If the variable is not null, its value is assigned to m. Otherwise, an exception is raised. However, if m were not constrained, the value would be assigned to m directly.
A more efficient way to write the last example follows:
PROCEDURE calc_m IS
   m NUMBER;  -- no constraint
   a NUMBER;
   b NUMBER;
BEGIN
   ...
   m := a + b;
   IF m IS NULL THEN  -- enforce constraint programmatically
      ...
   END IF;
END;
Note that the subtypes NATURALN and POSITIVEN are defined as NOT NULL. So, using them incurs the same performance cost.
The VARCHAR2 datatype involves a trade-off between memory use and efficiency. For a VARCHAR2(>= 2000) variable, PL/SQL dynamically allocates only enough memory to hold the actual value. However, for a VARCHAR2(< 2000) variable, PL/SQL preallocates enough memory to hold a maximum-size value. So, for example, if you assign the same 500-byte value to a VARCHAR2(2000) variable and to a VARCHAR2(1999) variable, the latter uses 1499 bytes more memory.
When you call a packaged subprogram for the first time, the whole package is loaded into the shared memory pool. So, subsequent calls to related subprograms in the package require no disk I/O, and your code executes faster. However, if the package is aged out of memory, it must be reloaded if you reference it again.
You can improve performance by sizing the shared memory pool correctly. Make sure it is large enough to hold all frequently used packages but not so large that memory is wasted.
Another way to improve performance is to pin frequently used packages in the shared memory pool. When a package is pinned, it is not aged out by the least recently used (LRU) algorithm that Oracle normally uses. The package remains in memory no matter how full the pool gets or how frequently you access the package.
You can pin packages with the help of the supplied package DBMS_SHARED_POOL. For more information, see Oracle9i Supplied PL/SQL Packages and Types Reference.
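For example, the following block pins a package in the shared pool. The package name SCOTT.EMP_UTILITIES is hypothetical; DBMS_SHARED_POOL itself is installed by running the supplied dbmspool.sql script:

```sql
BEGIN
   -- 'P' identifies the object as a package, procedure, or function;
   -- UNKEEP reverses the operation.
   DBMS_SHARED_POOL.KEEP('SCOTT.EMP_UTILITIES', 'P');
END;
```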
To help you manage the use of memory, PL/SQL provides the pragma SERIALLY_REUSABLE, which lets you mark some packages as serially reusable. You can so mark a package if its state is needed only for the duration of one call to the server (for example, an OCI call to the server or a server-to-server RPC).

The global memory for such packages is pooled in the System Global Area (SGA), not allocated to individual users in the User Global Area (UGA). That way, the package work area can be reused. When the call to the server ends, the memory is returned to the pool. Each time the package is reused, its public variables are initialized to their default values or to NULL.
The maximum number of work areas needed for a package is the number of concurrent users of that package, which is usually much smaller than the number of logged-on users. The increased use of SGA memory is more than offset by the decreased use of UGA memory. Also, Oracle ages-out work areas not in use if it needs to reclaim SGA memory.
For bodiless packages, you code the pragma in the package spec using the following syntax:
PRAGMA SERIALLY_REUSABLE;
For packages with a body, you must code the pragma in the spec and body. You cannot code the pragma only in the body. The following example shows how a public variable in a serially reusable package behaves across call boundaries:
CREATE PACKAGE pkg1 IS
   PRAGMA SERIALLY_REUSABLE;
   num NUMBER := 0;
   PROCEDURE init_pkg_state(n NUMBER);
   PROCEDURE print_pkg_state;
END pkg1;
/
CREATE PACKAGE BODY pkg1 IS
   PRAGMA SERIALLY_REUSABLE;
   PROCEDURE init_pkg_state (n NUMBER) IS
   BEGIN
      pkg1.num := n;
   END;
   PROCEDURE print_pkg_state IS
   BEGIN
      dbms_output.put_line('Num: ' || pkg1.num);
   END;
END pkg1;
/
BEGIN
   /* Initialize package state. */
   pkg1.init_pkg_state(4);
   /* On same server call, print package state. */
   pkg1.print_pkg_state;  -- prints 4
END;
/
-- subsequent server call
BEGIN
   -- the package's public variable is initialized
   -- to the default value automatically
   pkg1.print_pkg_state;  -- prints 0
END;
For more information, see Oracle9i Application Developer's Guide - Fundamentals.
As you develop larger and larger PL/SQL applications, it becomes more difficult to isolate performance problems. So, PL/SQL provides a Profiler API to profile run-time behavior and to help you identify performance bottlenecks. PL/SQL also provides a Trace API for tracing the execution of programs on the server. You can use Trace to trace the execution by subprogram or exception.
The Profiler API is implemented as PL/SQL package DBMS_PROFILER, which provides services for gathering and saving run-time statistics. The information is stored in database tables, which you can query later. For example, you can learn how much time was spent executing each PL/SQL line and subprogram.
To use the Profiler, you start the profiling session, run your application long enough to get adequate code coverage, flush the collected data to the database, then stop the profiling session. In a typical session, you take the following steps:

1. Start the Profiler by calling the procedure start_profiler in package DBMS_PROFILER and associating a comment with the Profiler session.
2. Run your application.
3. Call the procedure flush_data repeatedly to save incremental data and free memory allocated for data structures.
4. Stop the Profiler by calling the procedure stop_profiler.

The Profiler traces the execution of your program, computing the time spent at each line and in each subprogram. You can use the collected data to improve performance. For instance, you might focus on subprograms that run slowly.
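A typical session might be sketched as follows, assuming the profiler tables have been created (by running the supplied proftab.sql script) and that my_app.run_report stands in for your own workload:

```sql
BEGIN
   -- Associate a comment with this profiling run.
   DBMS_PROFILER.START_PROFILER(run_comment => 'tuning session 1');

   my_app.run_report;          -- hypothetical workload to be profiled

   DBMS_PROFILER.FLUSH_DATA;   -- write collected data to the tables
   DBMS_PROFILER.STOP_PROFILER;
END;
```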
For more information about package DBMS_PROFILER, see Oracle9i Supplied PL/SQL Packages and Types Reference.
The next step is to determine why more time was spent executing certain code segments or accessing certain data structures. Find the problem areas by querying the performance data. Focus on the subprograms and packages that use up the most execution time, inspecting possible performance bottlenecks such as SQL statements, loops, and recursive functions.
Use the results of your analysis to rework slow algorithms. For example, due to an exponential growth in data, you might need to replace a linear search with a binary search. Also, look for inefficiencies caused by inappropriate data structures, and, if necessary, replace those data structures.
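For example, assuming the profiler tables created by the supplied proftab.sql script, a query like the following ranks source lines by elapsed time for a given run (:run_id is a placeholder for the run you want to examine):

```sql
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
FROM   plsql_profiler_units u, plsql_profiler_data d
WHERE  u.runid = d.runid
AND    u.unit_number = d.unit_number
AND    u.runid = :run_id
ORDER  BY d.total_time DESC;
```

The lines at the top of the result are the natural starting points for tuning.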
With large, complex applications, it becomes difficult to keep track of calls between subprograms. By tracing your code with the Trace API, you can see the order in which subprograms execute. The Trace API is implemented as PL/SQL package DBMS_TRACE, which provides services for tracing execution by subprogram or exception.
To use Trace, you start the tracing session, run your application, then stop the tracing session. As the program executes, trace data is collected and stored in database tables. In a typical session, you take the following steps:

1. Start the tracing session by calling the procedure set_plsql_trace in package DBMS_TRACE.
2. Run your application.
3. Stop the tracing session by calling the procedure clear_plsql_trace.

For more information about package DBMS_TRACE, see Oracle9i Supplied PL/SQL Packages and Types Reference.
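A minimal tracing session might look like this; trace_all_calls is one of the tracing-level constants defined in DBMS_TRACE, and my_app.run_report is a hypothetical workload:

```sql
BEGIN
   -- Trace every subprogram call; other constants select
   -- narrower levels (e.g. enabled calls or exceptions only).
   DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_calls);

   my_app.run_report;  -- hypothetical workload to be traced

   DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;
```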
Tracing large applications can produce huge amounts of data that are difficult to manage. Before starting Trace, you can optionally limit the volume of data collected by selecting specific subprograms for trace data collection.
In addition, you can choose a tracing level. For example, you can choose to trace all subprograms and exceptions, or you can choose to trace selected subprograms and exceptions.
After correcting the flaws that slow down an application, you can use the following PL/SQL features and techniques:

- Native dynamic SQL
- Bulk binds
- The NOCOPY compiler hint
- The RETURNING clause
- External routines
- Object types and collections
- Compiling PL/SQL code for native execution

These easy-to-use features can speed up an application considerably.
Some programs (a general-purpose report writer for example) must build and process a variety of SQL statements at run time. So, their full text is unknown until then. Such statements can, and probably will, change from execution to execution. So, they are called dynamic SQL statements.
Formerly, to execute dynamic SQL statements, you had to use the supplied package DBMS_SQL. Now, within PL/SQL, you can execute any kind of dynamic SQL statement using an interface called native dynamic SQL.
Native dynamic SQL is easier to use and much faster than the DBMS_SQL package. In the following example, you declare a cursor variable, then associate it with a dynamic SELECT statement that returns rows from database table emp:
DECLARE
   TYPE EmpCurTyp IS REF CURSOR;
   emp_cv   EmpCurTyp;
   my_ename VARCHAR2(15);
   my_sal   NUMBER := 1000;
BEGIN
   OPEN emp_cv FOR
      'SELECT ename, sal FROM emp WHERE sal > :s' USING my_sal;
   ...
END;
For more information, see Chapter 11.
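For single dynamic statements, EXECUTE IMMEDIATE is the simplest form of native dynamic SQL. A brief sketch, using the dept table from the standard scott schema:

```sql
DECLARE
   sql_stmt  VARCHAR2(200);
   my_deptno NUMBER := 50;
   my_dname  VARCHAR2(20) := 'PERSONNEL';
BEGIN
   -- Bind values positionally with the USING clause.
   sql_stmt := 'INSERT INTO dept VALUES (:1, :2, :3)';
   EXECUTE IMMEDIATE sql_stmt USING my_deptno, my_dname, 'DALLAS';

   EXECUTE IMMEDIATE 'DELETE FROM dept WHERE deptno = :num'
      USING my_deptno;
END;
```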
When SQL statements execute inside a loop using collection elements as bind variables, context switching between the PL/SQL and SQL engines can slow down execution. For example, the following UPDATE statement is sent to the SQL engine with each iteration of the FOR loop:
DECLARE
   TYPE NumList IS VARRAY(20) OF NUMBER;
   depts NumList := NumList(10, 30, 70, ...);  -- department numbers
BEGIN
   ...
   FOR i IN depts.FIRST..depts.LAST LOOP
      ...
      UPDATE emp SET sal = sal * 1.10 WHERE deptno = depts(i);
   END LOOP;
END;
In such cases, if the SQL statement affects four or more database rows, the use of bulk binds can improve performance considerably. For example, the following UPDATE statement is sent to the SQL engine just once, with the entire nested table:
FORALL i IN depts.FIRST..depts.LAST
   UPDATE emp SET sal = sal * 1.10 WHERE deptno = depts(i);
To maximize performance, rewrite your programs as follows:

1. If an INSERT, UPDATE, or DELETE statement executes inside a loop and references collection elements, move it into a FORALL statement.
2. If a SELECT INTO, FETCH INTO, or RETURNING INTO clause references a collection, incorporate the BULK COLLECT clause.
3. If some iterations of a FORALL statement might fail, add SAVE EXCEPTIONS to the FORALL statement and report or clean up the errors in a subsequent loop using the %BULK_EXCEPTIONS attribute.

These are not trivial tasks. They require careful analysis of program control-flows and dependencies.
For more information about bulk binding, see "Reducing Loop Overhead for Collections with Bulk Binds".
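Combining these clauses, the loop from the earlier example can be rewritten so that one failing row does not abort the whole bulk operation. A sketch, assuming the emp table from the earlier examples:

```sql
DECLARE
   TYPE NumList IS TABLE OF NUMBER;
   depts NumList := NumList(10, 20, 30);
   bulk_errors EXCEPTION;
   PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- raised by SAVE EXCEPTIONS
BEGIN
   -- All updates go to the SQL engine in one trip; failed
   -- iterations are recorded rather than stopping the FORALL.
   FORALL i IN depts.FIRST..depts.LAST SAVE EXCEPTIONS
      UPDATE emp SET sal = sal * 1.10 WHERE deptno = depts(i);
EXCEPTION
   WHEN bulk_errors THEN
      FOR i IN 1..SQL%BULK_EXCEPTIONS.COUNT LOOP
         dbms_output.put_line('Iteration ' ||
            SQL%BULK_EXCEPTIONS(i).ERROR_INDEX || ' raised error ' ||
            SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
      END LOOP;
END;
```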
By default, OUT and IN OUT parameters are passed by value. That is, the value of an IN OUT actual parameter is copied into the corresponding formal parameter. Then, if the subprogram exits normally, the values assigned to OUT and IN OUT formal parameters are copied into the corresponding actual parameters.
When the parameters hold large data structures such as collections, records, and instances of object types, all this copying slows down execution and uses up memory. To prevent that, you can specify the NOCOPY hint, which allows the PL/SQL compiler to pass OUT and IN OUT parameters by reference. In the following example, you ask the compiler to pass IN OUT parameter my_unit by reference instead of by value:
DECLARE
   TYPE Platoon IS VARRAY(200) OF Soldier;

   PROCEDURE reorganize (my_unit IN OUT NOCOPY Platoon) IS ...
BEGIN
   ...
END;
For more information, see "Passing Large Data Structures with the NOCOPY Compiler Hint".
Often, applications need information about the row affected by a SQL operation, for example, to generate a report or take a subsequent action. The INSERT, UPDATE, and DELETE statements can include a RETURNING clause, which returns column values from the affected row into PL/SQL variables or host variables. This eliminates the need to SELECT the row after an insert or update, or before a delete. As a result, fewer network round trips, less server CPU time, fewer cursors, and less server memory are required.
In the following example, you update the salary of an employee and at the same time retrieve the employee's name and new salary into PL/SQL variables.
PROCEDURE update_salary (emp_id NUMBER) IS
   name    VARCHAR2(15);
   new_sal NUMBER;
BEGIN
   UPDATE emp SET sal = sal * 1.1
      WHERE empno = emp_id
      RETURNING ename, sal INTO name, new_sal;
   -- Now do computations involving name and new_sal
END;
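The RETURNING clause also combines with bulk binds: a FORALL statement can return the affected values for all rows into collections with RETURNING ... BULK COLLECT INTO. A sketch, assuming the emp table and some sample empno values:

```sql
DECLARE
   TYPE NumList  IS TABLE OF NUMBER;
   TYPE NameList IS TABLE OF VARCHAR2(15);
   emp_ids  NumList := NumList(7369, 7499, 7521);  -- sample empno values
   names    NameList;
   new_sals NumList;
BEGIN
   -- One round trip updates every row and collects the
   -- resulting names and salaries.
   FORALL i IN emp_ids.FIRST..emp_ids.LAST
      UPDATE emp SET sal = sal * 1.1
         WHERE empno = emp_ids(i)
         RETURNING ename, sal BULK COLLECT INTO names, new_sals;
END;
```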
PL/SQL provides an interface for calling routines written in other languages. Standard libraries already written and available in other languages can be called from PL/SQL programs. This promotes reusability, efficiency, and modularity.
PL/SQL is specialized for SQL transaction processing. Some tasks are more quickly done in a lower-level language such as C, which is very efficient at machine-precision calculations.
To speed up execution, you can rewrite computation-bound programs in C. In addition, you can move such programs from client to server, where they will execute faster thanks to more computing power and less across-network communication.
For example, you can write methods for an image object type in C, store them in a dynamic link library (DLL), register the library with PL/SQL, then call it from your applications. At run time, the library loads dynamically and, for safety, runs in a separate address space (implemented as a separate process).
For more information, see Oracle9i Application Developer's Guide - Fundamentals.
Collection types (see Chapter 5) and object types (see Chapter 10) increase your productivity by allowing for realistic data modeling. Complex real-world entities and relationships map directly into object types. And, a well-constructed object model can improve application performance by eliminating table joins, reducing round trips, and the like.
Client programs, including PL/SQL programs, can declare objects and collections, pass them as parameters, store them in the database, retrieve them, and so on. Also, by encapsulating operations with data, object types let you move data-maintenance code out of SQL scripts and PL/SQL blocks into methods.
Objects and collections are more efficient to store and retrieve because they can be manipulated as a whole. Also, object support is integrated with the database architecture, so it can take advantage of the many scalability and performance improvements built into each Oracle release.
You can speed up PL/SQL procedures by compiling them into native code residing in shared libraries. The procedures are translated into C code, then compiled with your usual C compiler and linked into the Oracle process. You can use this technique with both the supplied Oracle packages and procedures you write yourself. Procedures compiled this way work in a variety of server environments, such as the shared server configuration (formerly known as multi-threaded server).
Because this technique cannot do much to speed up SQL statements called from PL/SQL, it is most effective for compute-intensive PL/SQL procedures that do not spend most of their time executing SQL.
To speed up one or more procedures using this technique:

1. Update the supplied makefile, $ORACLE_HOME/plsql/spnc_makefile.mk, entering the paths and other values appropriate for your system.
2. Use the ALTER SYSTEM or ALTER SESSION command, or update your initialization file, to set the parameter PLSQL_COMPILER_FLAGS to include the value NATIVE. The default setting includes the value INTERPRETED, and you must remove this keyword from the parameter value.
3. Compile one or more procedures, using one of these methods:
   - Use the ALTER PROCEDURE or ALTER PACKAGE command to recompile the procedure or the entire package.
   - Drop the procedure and create it again, or use CREATE OR REPLACE to recompile the procedure.
   - Create the database with an initialization file that sets PLSQL_COMPILER_FLAGS=NATIVE. During database creation, the UTLIRP script is run to compile all the Oracle-supplied packages.
4. To check whether an existing procedure is compiled for native execution, query the data dictionary views USER_STORED_SETTINGS, DBA_STORED_SETTINGS, and ALL_STORED_SETTINGS. For example, to check the status of the procedure MY_PROC, you could enter:
SELECT param_value
FROM   user_stored_settings
WHERE  param_name = 'PLSQL_COMPILER_FLAGS'
AND    object_name = 'MY_PROC';
The PARAM_VALUE column has a value of NATIVE for procedures that are compiled for native execution, and INTERPRETED otherwise.
After the procedures are compiled and turned into shared libraries, they are automatically linked into the Oracle process. You do not need to restart the database, or move the shared libraries to a different location. You can call back and forth between stored procedures, whether they are all compiled in the default way (interpreted), all compiled for native execution, or a mixture of both.
Because the PLSQL_COMPILER_FLAGS setting is stored inside the library unit for each procedure, procedures compiled for native execution are compiled the same way when the procedure is recompiled automatically after being invalidated, such as when a table that it depends on is recreated.
You can control the behavior of PL/SQL native compilation through the ALTER SYSTEM or ALTER SESSION commands or by setting or changing these parameters in the initialization file:

- PLSQL_COMPILER_FLAGS
- PLSQL_NATIVE_LIBRARY_DIR (cannot be set by ALTER SESSION for security reasons)
- PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT
- PLSQL_NATIVE_MAKE_UTILITY
- PLSQL_NATIVE_MAKE_FILE_NAME
See Also: Oracle9i Database Reference for complete details about the initialization parameters and data dictionary views.
connect scott/tiger
set serveroutput on

alter session set plsql_native_library_dir='/home/orauser/lib';
alter session set plsql_native_make_utility='gmake';
alter session set plsql_native_make_file_name='/home/orauser/spnc_makefile.mk';
alter session set plsql_compiler_flags='NATIVE';

create or replace procedure hello_native_compilation
as
   right_now DATE;
begin
   dbms_output.put_line('Hello world');
   select sysdate into right_now from dual;  -- INTO added; a bare SELECT is not valid in PL/SQL
end;
/
As the procedure is compiled, you see the various compilation and link commands being executed. The procedure is immediately available to call, and runs as a shared library directly within the Oracle process.
When many procedures are compiled for native execution, the resulting shared libraries can overwhelm a single directory. In that case, set the parameter PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT in the initialization file before creating the database or compiling the PL/SQL packages or procedures. Set this parameter to a value (typically around 1000). Then create subdirectories underneath the directory specified in the PLSQL_NATIVE_LIBRARY_DIR parameter, naming them d0, d1, d2 ... d999, up to the value specified for the subdirectory count. When the procedures are compiled for native execution, the DLLs are automatically distributed among these subdirectories by the PL/SQL compiler.
Copyright © 1996, 2002 Oracle Corporation. All Rights Reserved.