Oracle® Application Server Performance Guide
10g Release 2 (10.1.2) B14001-02
This chapter discusses the techniques for optimizing Oracle HTTP Server performance in Oracle Application Server.
This chapter contains:
Correctly tuned TCP parameters can improve performance dramatically. This section contains recommendations for TCP tuning and a brief explanation of each parameter.
Table 5-1 contains recommended TCP parameter settings and includes references to discussions of each parameter.
Table 5-1 TCP Parameter Settings for Solaris Operating System (SPARC)
Parameter | Setting | Comments |
---|---|---|
tcp_conn_hash_size | 32768 | |
tcp_conn_req_max_q | 1024 | |
tcp_conn_req_max_q0 | 1024 | |
tcp_recv_hiwat | 32768 | |
tcp_slow_start_initial | 2 | |
tcp_time_wait_interval | 60000 | See "Specifying Retention Time for Connection Table Entries" |
tcp_xmit_hiwat | 32768 | |
Table 5-2 TCP Parameter Settings for HP-UX
Parameter | Scope | Default Value | Tuned Value | Comments |
---|---|---|---|---|
| | 60,000 | 60,000 | See "Specifying Retention Time for Connection Table Entries" |
| | 20 | 1,024 | |
| | 600,000 | 60,000 | |
| | 72,000,000 | 900,000 | |
| | 1,500 | 1,500 | |
| | 60,000 | 60,000 | |
| | 500 | 500 | |
| | 32,768 | 32,768 | |
| | 32,768 | 32,768 | |
Table 5-3 TCP Parameter Settings for Tru64
Parameter | Module | Default Value | Tuned Value | Comments |
---|---|---|---|---|
| | 512 | 16,384 | |
| | 1 | 16 (as of 5.0) | |
| | 0 | 1 | |
| | 16,384 | 65,535 | |
| | 16,384 | 65,535 | |
| | 1,024 | 65,535 | |
| | 0 | 65,535 | |
| | 0 | 600 | |
Table 5-4 TCP Parameter Settings for AIX
Parameter | Module | Default Value | Recommended Value | Comments |
---|---|---|---|---|
| | 0 | 1 | |
| | 65,536 | 131,072 | |
| | 512 | 1,024 | |
| | 50 | 100 | |
| | 16,384 | 65,536 | |
| | 16,384 | 65,536 | |
| | 30 | 150 | |
|
Linux only allows you to use 15 bits of the TCP window field. This means that you must either multiply everything by 2, or recompile the kernel without this limitation.
There is no sysctl application for changing kernel values; you can change the kernel values with an editor such as vi. Edit the following files to change kernel values.
Table 5-5 Linux TCP Parameters
Filename | Details |
---|---|
/proc/sys/net/core/rmem_default | Default Receive Window |
/proc/sys/net/core/rmem_max | Maximum Receive Window |
/proc/sys/net/core/wmem_default | Default Send Window |
/proc/sys/net/core/wmem_max | Maximum Send Window |
You will find some other possibilities to tune TCP in /proc/sys/net/ipv4/:
tcp_timestamps
tcp_window_scaling
tcp_sack
There is a brief description of each TCP parameter in Documentation/networking/ip-sysctl.txt in the Linux kernel source tree.
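For example, the window sizes listed in Table 5-5 can be raised by writing to the files with echo. In this sketch, TCPDIR points at a scratch directory so the commands are safe to demonstrate anywhere; on a real Linux system it would be /proc/sys/net/core, and you would run the commands as root:

```shell
# Write new default/maximum window sizes (the Table 5-5 files).
TCPDIR=${TCPDIR:-$(mktemp -d)}       # use /proc/sys/net/core on a real system
echo 65535 > "$TCPDIR/rmem_default"  # default receive window
echo 65535 > "$TCPDIR/rmem_max"      # maximum receive window
echo 65535 > "$TCPDIR/wmem_default"  # default send window
echo 65535 > "$TCPDIR/wmem_max"      # maximum send window
cat "$TCPDIR/rmem_max"               # verify the new setting
```

Note that values greater than 65,535 require window scaling, as discussed below.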
All the preceding TCP parameter values are set by default by a header file in the Linux kernel source directory /LINUX-SOURCE-DIR/include/linux/skbuff.h
These values are the defaults. This is run time configurable.
# ifdef CONFIG_SKB_LARGE #define SK_WMEM_MAX 65535 #define SK_RMEM_MAX 65535 # else #define SK_WMEM_MAX 32767 #define SK_RMEM_MAX 32767 #endif
You can change the MAX_WINDOW value in the Linux kernel source directory in the file /LINUX-SOURCE-DIR/include/net/tcp.h.
#define MAX_WINDOW 32767
#define MIN_WINDOW 2048
Note: Never assign values greater than 32767 to windows without using window scaling. |
The MIN_WINDOW definition limits you to using only 15 bits of the window field in the TCP packet header. For example, if you use a 40 KB window, set rmem_default to 40 KB. The stack recognizes that the value is less than 64 KB and does not negotiate a winshift, but because of the 15-bit limitation you get only 32 KB. You therefore need to set the rmem_default value to greater than 64 KB to force winshift=1. This lets you express the required 40 KB in only 15 bits.
With the tuned TCP stacks, it was possible to get a maximum throughput of between 1.5 and 1.8 Mbits per second through a 2 Mbit per second satellite link, measured with netperf.
A sample script, tcpset.sh
, that changes TCP parameters to the settings recommended here, is included in the $ORACLE_HOME/Apache/Apache/bin/
directory.
Note: If your system is restarted after you run the script, the default settings will be restored and you will have to run the script again. To make the settings permanent, enter them in your system startup file. |
If you have a large user population, you should increase the hash size for the TCP connection table. The hash size is the number of hash buckets used to store the connection data. If the buckets are very full, it takes more time to find a connection. Increasing the hash size reduces the connection lookup time, but increases memory consumption.
Suppose your system performs 100 connections per second. If you set tcp_time_wait_interval to 60000, then there will be about 6000 entries in your TCP connection table at any time. Increasing your hash size to 2048 or 4096 will improve performance significantly.
On a system servicing 300 connections per second, changing the hash size from the default of 256 to a number close to the number of connection table entries decreased the average round trip time by up to three to four seconds. The maximum hash size is 262144. Ensure that you increase memory as needed.
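The sizing arithmetic above can be sketched in the shell; the connection rate here is a hypothetical input for illustration:

```shell
# Estimate the number of TCP connection-table entries at steady state.
conns_per_sec=100        # hypothetical incoming connection rate
time_wait_ms=60000       # tcp_time_wait_interval setting, in milliseconds
entries=$(( conns_per_sec * time_wait_ms / 1000 ))
echo "expected connection-table entries: $entries"
# Choose a hash size close to this figure (for example, 2048 or 4096),
# trading memory for shorter connection lookups.
```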
To set the tcp_conn_hash_size for the Solaris Operating System, add the following line to the /etc/system file. The parameter takes effect when the system is restarted.
set tcp:tcp_conn_hash_size=32768
On Tru64, set tcbhashsize in the /etc/sysconfigtab file.
As described in the previous section, when a connection is established, the data associated with it is maintained in the TCP connection table. On a busy system, much of TCP performance (and by extension web server performance) is governed by the speed with which the entry for a specific TCP connection can be accessed in the connection table. The access speed depends on the number of entries in the table, and on how the table is structured (for example, its hash size). The number of entries in the table depends both on the rate of incoming requests, and on the lifetime of each connection.
For each connection, the server maintains the TCP connection table entry for some period after the connection is closed so it can identify and properly dispose of any leftover incoming packets from the client. The length of time that a TCP connection table entry is maintained after the connection is closed can be controlled with the tcp_time_wait_interval parameter. The default for the Solaris Operating System for this parameter is 240,000 ms, in accordance with the TCP standard. The four minute setting on this parameter is intended to prevent congestion on the Internet due to error packets being sent in response to packets which should be ignored. In practice, 60,000 ms is sufficient, and is considered acceptable. This setting greatly reduces the number of entries in the TCP connection table while keeping the connection long enough to discard most, if not all, leftover packets associated with it. We therefore suggest you set:
On HP-UX and for Solaris Operating System 2.7 and higher:
/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 60000
Note: If your user population is widely dispersed with respect to Internet topology, you may want to set this parameter to a higher value. You can improve access time to the TCP connection table with the tcp_conn_hash_size parameter. |
During the TCP connection handshake, the server, after receiving a request from a client, sends a reply, and waits to hear back from the client. The client responds to the server's message and the handshake is complete. Upon receiving the first request from the client, the server makes an entry in the listen queue. After the client responds to the server's message, it is moved to the queue for messages with completed handshakes. This is where it will wait until the server has resources to service it.
The maximum length of the queue for incomplete handshakes is governed by tcp_conn_req_max_q0, which by default is 1024. The maximum length of the queue for requests with completed handshakes is defined by tcp_conn_req_max_q, which by default is 128.
On most web servers, the defaults will be sufficient, but if you have several hundred concurrent users, these settings may be too low. In that case, connections will be dropped in the handshake state because the queues are full. You can determine whether this is a problem on your system by inspecting the values for tcpListenDrop, tcpListenDropQ0, and tcpHalfOpenDrop with netstat -s. If either of the first two values is nonzero, you should increase the maximums.
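A quick way to inspect those counters is sketched below; the sample string stands in for real output, and on a live Solaris system you would pipe the output of netstat -s instead:

```shell
# Flag a nonzero tcpListenDrop counter as a sign the listen queues are
# too small. 'sample' mimics one line of `netstat -s` output on Solaris.
sample='      tcpListenDrop       =    37    tcpListenDropQ0     =     0'
drops=$(echo "$sample" | awk '{ for (i = 1; i <= NF; i++)
                                  if ($i == "tcpListenDrop") print $(i+2) }')
echo "tcpListenDrop = $drops"
if [ "$drops" -gt 0 ]; then
    echo "consider raising tcp_conn_req_max_q and tcp_conn_req_max_q0"
fi
```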
The defaults are probably sufficient, but Oracle recommends that you increase the value of tcp_conn_req_max_q to 1024. You can set these parameters with the following commands.
On the Solaris Operating System:
% /usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q 1024
% /usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q0 1024
On HP-UX:
prompt> /usr/sbin/ndd -set /dev/tcp tcp_conn_req_max 1024
TCP implements a slow start data transfer to prevent overloading a busy segment of the Internet. With slow start, one packet is sent, an acknowledgment is received, and then two packets are sent. The number of packets sent continues to double after each acknowledgment, until the TCP transfer window limits are reached.
Unfortunately, some operating systems do not immediately acknowledge the receipt of a single packet during connection initiation. By default, the Solaris Operating System sends only one packet during connection initiation, per the TCP standard. This can increase the connection startup time significantly. We therefore recommend increasing the number of initial packets to two when initiating a data transfer. This can be accomplished using the following command:
% /usr/sbin/ndd -set /dev/tcp tcp_slow_start_initial 2
The size of the TCP transfer windows for sending and receiving data determines how much data can be sent without waiting for an acknowledgment. The default window size is 8192 bytes. Unless your system is memory constrained, these windows should be increased to the maximum size of 32768. This can speed up large data transfers significantly. Use these commands to enlarge the window:
On Solaris Operating System:
% /usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 32768
% /usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 32768
On HP-UX:
prompt> /usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwater_def 32768
prompt> /usr/sbin/ndd -set /dev/tcp tcp_recv_hiwater_def 32768
Because the client typically receives the bulk of the data, it would help to enlarge the TCP receive windows on end users' systems, as well.
On Windows systems, to maximize network performance for the system (after ensuring that memory is sufficient) you should:
Run only the TCP/IP protocol on the system
Use the Maximize Throughput for File Sharing setting for TCP optimization
This section covers the following:
On Windows 2000 systems, to maximize network performance for the system set the maximize throughput for network applications property. To do this, perform the following steps:
On the Windows 2000 Desktop, right click My Network Places and select Properties.
Right click on Local Area Connection and select Properties.
Under Components checked are used by this connection, select File and Printer Sharing for Microsoft Networks.
Click the Properties button and select Maximize data throughput for network applications.
Click OK, and then click OK again.
On Windows XP systems, to maximize network performance for the system, set the maximize data throughput for file sharing property. To do this, perform the following steps:
Open Network Connections. To open Network Connections, click Start, click Control Panel, click Network and Internet Connections, and then click Network Connections.
Right-click a connection, and then click Properties.
Do one of the following:
If this is a local area connection, on the General tab, in Components checked are used by this connection, click File and Printer Sharing for Microsoft Networks, and then click Properties.
If this is a dial-up, VPN, or incoming connection, on the Networking tab, in Use these components with this connection, click File and Printer Sharing for Microsoft Networks, and then click Properties.
To dedicate as many resources as possible to file and print server services, click Maximize data throughput for file sharing.
You can only configure File and Printer Sharing for Microsoft Networks on a server. To share local folders, you must enable File and Printer Sharing for Microsoft Networks. The File and Printer Sharing for Microsoft Networks component is the equivalent of the Server service in Windows NT 4.0.
Oracle HTTP Server uses directives in httpd.conf
to configure the application server. This configuration file specifies the maximum number of HTTP requests that can be processed simultaneously, logging details, and certain limits and timeouts.
Table 5-6 lists directives that may be significant for performance.
In addition, this section covers the following topics:
Table 5-6 Oracle HTTP Server Configuration Properties
Directive | Description |
---|---|
ListenBacklog | Specifies the maximum length of the queue of pending connections. Generally no tuning is needed or desired. Note that some operating systems do not use exactly what is specified as the backlog, but use a number based on, but normally larger than, what is set. Default Value: 511 |
MaxClients | Specifies a limit on the total number of servers running, that is, a limit on the number of clients who can simultaneously connect. If the number of client connections reaches this limit, then subsequent requests are queued in the TCP/IP system up to the limit specified with the ListenBacklog directive. The maximum allowed value for MaxClients is 8192 (8K). Default Value: 150 |
MaxRequestsPerChild | The number of requests each child process is allowed to process before the child dies. The child exits so as to avoid problems after prolonged use when Apache (and maybe the libraries it uses) leaks memory or other resources. On most systems, this is not really needed, but some UNIX systems have notable leaks in the libraries. For these platforms, set MaxRequestsPerChild to a value such as 10000; a setting of 0 means unlimited. This value does not include KeepAlive requests after the initial request per connection. Note: the behavior of this directive differs on Windows systems. |
MinSpareServers, MaxSpareServers | Server-pool size regulation. Rather than making you guess how many server processes you need, Oracle HTTP Server dynamically adapts to the load it sees; that is, it tries to maintain enough server processes to handle the current load, plus a few spare servers to handle transient load spikes (for example, multiple simultaneous requests from a single Netscape browser). It does this by periodically checking how many servers are waiting for a request. If there are fewer than MinSpareServers, it creates a new spare; if there are more than MaxSpareServers, the excess spares are killed off. The default values are probably adequate for most sites. Default Values: MinSpareServers: 5, MaxSpareServers: 10 |
StartServers | Number of servers to start initially. If you expect a sudden load after restart, set this value based on the number of child servers required. Default Value: 5 |
Timeout | The number of seconds before incoming receives and outgoing sends time out. Default Value: 300 |
KeepAlive | Whether or not to allow persistent connections (more than one request per connection). Set to Off to disable persistent connections. Default Value: On |
MaxKeepAliveRequests | The maximum number of requests to allow during a persistent connection. Set to 0 to allow an unlimited number. If you have long client sessions, you might want to increase this value. Default Value: 100 |
KeepAliveTimeout | Number of seconds to wait for the next request from the same client on the same connection. Default Value: 15 seconds |
The MaxClients directive limits the number of clients that can simultaneously connect to your web server, and thus the number of httpd processes. You can configure this parameter in the httpd.conf file up to a maximum of 8K (the default value is 150).
Tests on a previous release, with static page requests (average size 20K) on a 2-processor system, showed that:
The default MaxClients setting of 150 was sufficient to saturate the network.
Approximately 60 httpd processes were required to support 300 concurrent users (no think time).
On the system described, and on 4- and 6-processor systems, there was no significant performance improvement in increasing the MaxClients setting from 150 to 256, based on static page and servlet tests with up to 1000 users.
Increasing MaxClients when system resources are saturated does not improve performance. When there are no httpd processes available, connection requests are queued in the TCP/IP system until a process becomes available, and eventually clients terminate connections. If you are using persistent connections, you may require more concurrent httpd server processes.
For dynamic requests, if the system is heavily loaded, it might be better to allow the requests to queue in the network (thereby keeping the load on the system manageable). The question for the system administrator is whether a timeout error and retry is better than a long response time. In this case, the MaxClients setting could be reduced, as a throttle on the number of concurrent requests on the server.
The MaxClients parameter on UNIX systems works like the ThreadsPerChild parameter on Windows systems.
The default settings for the KeepAlive directives are:
KeepAlive on
MaxKeepAliveRequests 100
KeepAliveTimeOut 15
These settings allow enough requests per connection and time between requests to reap the benefits of the persistent connections, while minimizing the drawbacks. You should consider the size and behavior of your own user population in setting these values on your system. For example, if you have a large user population and the users make small, infrequent requests, you may want to reduce the KeepAlive directive default settings, or even set KeepAlive to Off. If you have a small population of users that return to your site frequently, you may want to increase the settings.
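For instance, a site with a large population of infrequent requesters might tighten the defaults along these lines; the values below are illustrative, not Oracle recommendations:

```
KeepAlive on
MaxKeepAliveRequests 50
KeepAliveTimeOut 5
```

A smaller timeout releases idle connections (and their httpd processes) sooner, at the cost of more handshakes for returning clients.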
The ThreadsPerChild parameter in the httpd.conf file specifies the number of requests that can be handled concurrently by the HTTP server. Requests in excess of the ThreadsPerChild parameter value wait in the TCP/IP queue. Allowing the requests to wait in the TCP/IP queue often results in the best response time and throughput.
The ThreadsPerChild parameter on Windows systems works like the MaxClients parameter on UNIX systems.
The more concurrent threads you make available to handle requests, the more requests your server can process. But be aware that with too many threads, under high load, requests will be handled more slowly and the server will consume more system resources.
In in-house tests of static page requests, a setting of 20 ThreadsPerChild per CPU produced good response time and throughput results. For example, if you have four CPUs, set ThreadsPerChild to 80. If, with this setting, CPU utilization does not exceed 85%, you can increase ThreadsPerChild, but ensure that the available threads are in use.
The mod_oc4j.conf Oc4jCacheSize directive specifies the maximum number of idle connections that mod_oc4j maintains per OC4J JVM. On UNIX systems, where each Oracle HTTP Server process is single threaded, the only meaningful values are one (1) and zero (0). A value of zero (0) specifies that Oracle HTTP Server should not maintain any connections and should open a new connection for every request. Since each process is single threaded, a process never needs more than one connection, and hence a value of 1 or greater has the same effect on UNIX systems.
On Windows systems, the connection cache is shared among threads in the child. If the user's load is all OC4J requests, that is, if Oracle HTTP Server serves little or no static content and acts just as a front end for OC4J, then it is a good idea to set Oc4jCacheSize equal to ThreadsPerChild. This setting provides a dedicated connection per thread, if needed, and should give the best performance.
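For example, on a four-CPU Windows front end handling only OC4J requests, the two directives described above could be paired as follows (values are illustrative):

```
# httpd.conf
ThreadsPerChild 80

# mod_oc4j.conf
Oc4jCacheSize 80
```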
This section discusses types of logging, log levels, and the performance implications for using logging.
For static page requests, access logging of the default fields results in a 2-3% performance cost.
By default, the HostNameLookups directive is set to Off. The server writes the IP addresses of incoming requests to the log files. When HostNameLookups is set to On, the server queries the DNS system on the Internet to find the host name associated with the IP address of each request, and then writes the host names to the log.
Performance degraded by about 3% (best case) in Oracle in-house tests with HostNameLookups set to On. Depending on the server load and the network connectivity to your DNS server, the performance cost of the DNS lookup could be high. Unless you really need to have host names in your logs in real time, it is best to log IP addresses.
On UNIX systems, you can resolve IP addresses to host names off-line with the logresolve utility found in the $ORACLE_HOME/Apache/Apache/bin/ directory.
The server notes unusual activity in an error log. The ErrorLog and LogLevel directives identify the log file and the level of detail of the messages recorded. The default level is warn. There was no difference in static page performance on a loaded system between the warn, info, and debug levels.
For requests that use dynamic resources, for example requests that use mod_osso, mod_plsql, or mod_oc4j, there is a performance cost associated with setting higher debugging levels, such as the debug level.
This section covers the following topics:
Secure Sockets Layer (SSL) is a protocol developed by Netscape Communications Corporation that provides authentication and encrypted communication over the Internet. Conceptually, SSL resides between the application layer and the transport layer on the protocol stack. While SSL is technically an application-independent protocol, it has become a standard for providing security over HTTP, and all major web browsers support SSL.
SSL can become a bottleneck in both the responsiveness and the scalability of a web-based application. Where SSL is required, the performance challenges of the protocol should be carefully considered. Session management, in particular session creation and initialization, is generally the most costly part of using the SSL protocol, in terms of performance.
This section covers the following SSL Performance related information:
When an SSL connection is initialized, a session based handshake between client and server occurs that involves the negotiation of a cipher suite, the exchange of a private key for data encryption, and server and, optionally, client authentication through digitally-signed certificates.
After the SSL session state has been initiated between a client and a server, the server can avoid the session creation handshake in subsequent SSL requests by saving and reusing the session state. The Oracle HTTP Server caches a client's Secure Sockets Layer (SSL) session information by default. With session caching, only the first connection to the server incurs high latency.
The SSLSessionCacheTimeout directive in httpd.conf determines how long the server keeps a saved SSL session (the default is 300 seconds). Session state is discarded if it is not used after the specified time period, and any subsequent SSL request must establish a new SSL session and begin the handshake again. The SSLSessionCache directive specifies the location for saved SSL session information; the default location is the $ORACLE_HOME/Apache/Apache/logs/ directory on UNIX, or %ORACLE_HOME%\Apache\Apache\logs\ on Windows systems. Multiple Oracle HTTP Server processes can use a saved session cache file.
Saving SSL session state can significantly improve performance for applications using SSL. For example, in a simple test to connect and disconnect to an SSL-enabled server, the elapsed time for 5 connections was 11.4 seconds without SSL session caching. With SSL session caching enabled, the elapsed time for 5 round trips was 1.9 seconds.
The reuse of saved SSL session state has some performance costs. When SSL session state is stored to disk, reuse of the saved state normally requires locating and retrieving the relevant state from disk. This cost can be reduced when using HTTP persistent connections. Oracle HTTP Server uses persistent HTTP connections by default, assuming they are supported on the client side. In HTTP over SSL as implemented by Oracle HTTP Server, SSL session state is kept in memory while the associated HTTP connection is persisted, a process which essentially eliminates the overhead of SSL session reuse (conceptually, the SSL connection is kept open along with the HTTP connection).
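In httpd.conf, the two directives discussed above might appear as follows. The timeout value is the default from the text; the cache file location and argument syntax are assumptions for illustration, so check them against your release's SSL configuration reference:

```
SSLSessionCache /ORACLE_HOME/Apache/Apache/logs/ssl_scache
SSLSessionCacheTimeout 300
```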
In most applications using SSL, the data encryption cost is small compared with the cost of SSL session management. Encryption costs can be significant where the volume of encrypted data is large, and in such cases the data encryption algorithm and key size chosen for an SSL session can be significant.
In general there is a trade-off between security level and performance. For example, on a modern processor, RSA estimates its RC4 cipher to take in the vicinity of 8-16 machine operations per output byte. Standard DES encryption will incur roughly 8 times the overhead of RC4, and triple DES will take about 25 times the overhead of DES. However, when using triple DES, the encryption costs will not be noticeable in most applications. Oracle HTTP Server supports these three cipher suites, and other cipher suites as well.
Oracle HTTP Server negotiates a cipher suite with a client based on the SSLCipherSuite attribute specified in httpd.conf.
The following recommendations can assist you with determining performance requirements when working with Oracle HTTP Server and SSL.
The SSL handshake is an inherently expensive process in terms of both CPU usage and response time. Thus, use SSL only where needed. Determine the parts of the application that require the security, and the level of security required, and protect only those parts at the requisite security level. Attempt to minimize the need for the SSL handshake by using SSL sparingly, and by reusing session state as much as possible. For example, if a page contains a small amount of sensitive data and a number of non-sensitive graphic images, use SSL to transfer the sensitive data only, use normal HTTP to transfer the images. If the application requires server authentication only, do not use client authentication. If the performance goals of an application cannot be met by this method alone, additional hardware may be required.
Design the application to use SSL efficiently. Group secure operations together to take advantage of SSL session reuse and SSL connection reuse.
Use persistent connections, if possible, to minimize cost of SSL session reuse.
Tune the session cache timeout value (the SSLSessionCacheTimeout attribute in httpd.conf). A trade-off exists between the cost of maintaining an SSL session cache and the cost of establishing a new SSL session. As a rule, any secured business process, or conceptual grouping of SSL exchanges, should be completed without incurring session creation more than once. The default value for the SSLSessionCacheTimeout attribute is 300 seconds. It is a good idea to test an application's usability to help tune this setting.
If large volumes of data are being protected through SSL, pay close attention to the cipher suite being used. The SSLCipherSuite directive specified in httpd.conf controls the cipher suite. If lower levels of security are acceptable, use a less-secure protocol with a smaller key size (this may improve performance significantly). Finally, test the application using each available cipher suite for the desired security level to find the most performant suite.
Having taken the preceding considerations into account, if SSL remains a bottleneck to the performance and scalability of your application, consider deploying multiple Oracle HTTP Server instances over a hardware cluster or consider the use of SSL accelerator cards.
When OracleAS Port Tunneling is configured, every request processed passes through the OracleAS Port Tunneling infrastructure. Thus, using OracleAS Port Tunneling can have an impact on the overall Oracle HTTP Server request handling performance and scalability.
With the exception of the number of OracleAS Port Tunneling processes to run, the performance of OracleAS Port Tunneling is self tuning. The only performance control available is to start more OracleAS Port Tunneling processes; this increases the number of available connections and hence the scalability of the system.
The number of OracleAS Port Tunneling processes is based on the degree of availability required and the number of anticipated connections. This number cannot be determined automatically because, for each additional process, a new port must be opened through the firewall between the DMZ and the intranet. You cannot start more processes than you have open ports, and you do not want fewer processes than open ports, since in that case some ports would not have any process bound to them.
To measure the OracleAS Port Tunneling performance, determine the request time for servlet requests that pass through the OracleAS Port Tunneling infrastructure. The response time of an Oracle Application Server instance running with OracleAS Port Tunneling should be compared with a system without OracleAS Port Tunneling to determine whether your performance requirements can be met using OracleAS Port Tunneling.
See Also: Oracle HTTP Server Administrator's Guide for information on configuring OracleAS Port Tunneling |
The following tips can enable you to avoid or debug potential Oracle HTTP Server (OHS) performance problems:
It is important to understand where your server is spending resources so you can focus your tuning efforts in the areas where the most stands to be gained. In configuring your system, it can be useful to know what percentage of the incoming requests are static and what percentage are dynamic.
Static pages can be cached by Oracle Application Server Web Cache, if it is in use. Generally, you want to concentrate your tuning effort on dynamic pages because dynamic pages can be costly to generate. Also, by monitoring and tuning your application, you may find that much of the dynamically generated content, such as catalog data, can be cached, sparing significant resource usage.
In some cases, you may notice a high discrepancy between the average time to process a request in Oracle Application Server Containers for J2EE (OC4J) and the average response time experienced by the user. If the time is not being spent actually doing the work in OC4J, then it is probably being spent in transport.
If you notice a large discrepancy between the request processing time in OC4J and the average response time, consider tuning the Oracle HTTP Server directives shown in the section, "Configuring Oracle HTTP Server Directives".
You can get unrepresentative results when data outliers appear. This can sometimes occur at start-up. To simulate a simple example, assume that you ran a PL/SQL "Hello, World" application for about 30 seconds. Examining the results, you can see that the work was all done in mod_plsql.c:
/ohs_server/ohs_module/mod_plsql.c
handle.maxTime:     859330
handle.minTime:      17099
handle.avg:          19531
handle.active:           0
handle.time:      24023499
handle.completed:     1230
Note that handle.maxTime is much higher than handle.avg for this module. This is probably because when the first request is received, a database connection must be opened; later requests can make use of the established connection. In this case, to obtain a better estimate of the average service time for a PL/SQL module, one that does not include the database connection open time that makes handle.maxTime so large, recalculate the average as follows:
(time - maxTime)/(completed -1)
For example, in this case this would be:
(24023499 - 859330)/(1230 -1) = 18847.98
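This adjustment can be sketched in a few lines of Python, using the metric values from the handle output shown above:

```python
# Recalculate the average service time with the single slowest request removed.
# The slowest request (handle.maxTime) includes the one-time database
# connection open, so excluding it gives a more representative average.
# Values are taken from the mod_plsql.c metrics shown above.

def adjusted_avg(total_time, max_time, completed):
    """Average service time excluding the single slowest (outlier) request."""
    return (total_time - max_time) / (completed - 1)

avg = adjusted_avg(total_time=24023499, max_time=859330, completed=1230)
print(round(avg, 2))  # 18847.98
```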
At many sites, Oracle Application Server uses the Oracle HTTP Server module mod_oc4j to load balance incoming stateless HTTP requests. By selecting the appropriate load balancing policy for mod_oc4j, you can improve performance on your site. The mod_oc4j module supports several configurable load balancing policies, including the following:
Round robin routing (this is the default mod_oc4j load balancing policy)

Random routing

Round robin or random with local affinity routing, using the local option

Round robin or random with host-level weighted routing, using the weighted option
Note: For a session-based request, mod_oc4j always directs the request to the original OC4J process that created the session, unless that process is not available. In case of failure, mod_oc4j sends the request to another OC4J process with the same island name as the original request (either within the same host if available, or on a remote host).
This section covers the following topics:
This section provides a quick summary of the load balancing configuration you may want to use when configuring mod_oc4j for Oracle Application Server:
When Oracle Application Server runs in a single host with one or more OC4J Instances, we recommend using either the round robin or random load balancing policy. The performance characteristics for the particular policy can depend on the applications that run on your site; however, in many cases these two policies will yield similar performance.
When Oracle Application Server is configured at a site that uses multiple hosts with the same hardware and Oracle Application Server configurations, we recommend using either round robin with the local affinity option or random with the local affinity option.
When Oracle Application Server is configured at a site that uses multiple hosts with different hardware and different Oracle Application Server configurations, we recommend using either round robin with the weighted option or random with the weighted option. For sites where it is difficult to determine how much load each host can handle, and it is difficult to assign an accurate routing weight, you may want to use either round robin with the local affinity option or random with the local affinity option.
See Also: Oracle HTTP Server Administrator's Guide for a description of mod_oc4j configuration options
Using round robin routing or random routing, without the local or weighted options, specifies that mod_oc4j creates a list of all the available OC4J processes across all hosts. For incoming requests, mod_oc4j routes the requests using the list of available OC4J processes, either selecting processes from the list randomly, or using a round robin selection policy (with round robin, the first request is selected randomly, and subsequent requests are selected using the round robin policy).
If you use either of these load balancing policies, you need to consider the number of OC4J processes that you run on each host. If you do not specify the weighted routing option for mod_oc4j, then configuring your site to start different numbers of OC4J processes on each host causes an implicit weighting, where more requests are sent to hosts with more OC4J processes. If this implicit weighting of requests by the number of OC4J processes per host is not what you want, then you should consider specifying a routing weight for each host and using the weighted option.
Note: In many cases the round robin and random policies will yield similar performance. |
For example, if you use the default round robin load balancing policy and you start 4 OC4J processes on Host_A and 1 OC4J process on Host_B, then mod_oc4j sends 4 requests to Host_A for each 1 request that it sends to Host_B. Thus, with this configuration you are implicitly sending 4 times as many requests to Host_A.
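The implicit weighting described above can be illustrated with a short simulation. This is a sketch only, not mod_oc4j's actual implementation: round robin over the combined process list routes requests in proportion to each host's process count.

```python
from collections import Counter
from itertools import cycle

# Hypothetical process list: 4 OC4J processes on Host_A, 1 on Host_B.
processes = ["Host_A"] * 4 + ["Host_B"] * 1

# Round robin over the combined list for 1000 incoming requests.
routing = cycle(processes)
counts = Counter(next(routing) for _ in range(1000))

# Host_A receives 4 times as many requests as Host_B.
print(counts["Host_A"], counts["Host_B"])  # 800 200
```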
Selecting the local affinity option tells mod_oc4j to always try to select a local OC4J process to service incoming requests. When no local OC4J processes are available, mod_oc4j selects from a list of available remote OC4J processes. You can select either the round robin or the random policy with the local affinity option.

For example, to select the round robin policy with local affinity, specify the following directive in mod_oc4j.conf:
Oc4jSelectMethod roundrobin:local
Selecting the weighted routing option specifies that mod_oc4j should distribute HTTP requests across the available hosts, using a specified routing weight to calculate the distribution of incoming requests sent to each host. The routing weight is specified with the Oc4jRoutingWeight directive. You can specify either the round robin or the random policy with the weighted option.
For example, if the routing weight set for Host_A is 3 and the routing weight set for Host_B is 1, then Host_A is sent three times the number of requests sent to Host_B.
Note: Using weighted routing, incoming requests are routed according to the specified routing weight, without consideration for the number of OC4J processes running on each host.
To configure the mod_oc4j module in Oracle HTTP Server to specify round robin with a routing weight of 3 for Host_A and a routing weight of 1 for Host_B, add the following directives to mod_oc4j.conf:

Oc4jSelectMethod roundrobin:weighted
Oc4jRoutingWeight Host_A 3
In this example you do not need to specify a routing weight for Host_B, since the default routing weight is 1.
You need to determine the routing weight for each system based on what other components are running on the systems and based on how many requests each system can adequately handle.
Note: An inaccurate specification for the routing weight could have negative performance implications for your site.
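To see the effect of a routing weight, the following sketch expands each host into a round robin cycle according to its weight. This is an illustration only, not the algorithm mod_oc4j uses internally; the weights correspond to a hypothetical configuration of Oc4jRoutingWeight 3 for Host_A and 1 for Host_B.

```python
from collections import Counter
from itertools import cycle

# Hypothetical routing weights, as would be set with Oc4jRoutingWeight.
weights = {"Host_A": 3, "Host_B": 1}

# Expand each host into the round robin cycle according to its weight,
# so hosts with larger weights receive proportionally more requests.
schedule = cycle([host for host, w in weights.items() for _ in range(w)])

counts = Counter(next(schedule) for _ in range(400))

# Host_A receives three times as many requests as Host_B.
print(counts["Host_A"], counts["Host_B"])  # 300 100
```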
In general, when configuring the mod_oc4j load balancing policy, we recommend the following:
If you have multiple systems with similar hardware configurations, use the round robin or random load balancing policy with local affinity.
For example, if you have multiple hosts with the same number of CPUs of the same speed and the same memory, with Oracle HTTP Server running the same number of OC4J processes on each host, and you are using a hardware load balancer or web cache in the front end to route requests to Oracle HTTP Server on each host, then using either round robin with local affinity or random with local affinity is recommended.
If you have multiple systems, each with a similar hardware configuration, and you want to run Oracle HTTP Server only on one host, then select either the round robin with the weighted option or random with the weighted option.
For example, consider a site with two hosts, Host_A and Host_B, each with 2 CPUs. On this site you run Oracle HTTP Server only on Host_A, and each host includes one OC4J instance with one OC4J process. With this configuration, selecting round robin with the weighted option or random with the weighted option, and using a higher routing weight on Host_B, helps shift more requests to Host_B. Since Host_B is not running Oracle HTTP Server, this configuration should provide better performance for this site.
If you are running Oracle HTTP Server on a separate system that routes HTTP requests to multiple hosts running only OC4J, and the systems use similar hardware with the same number of OC4J processes, then use the round robin or random load balancing policy.
If you are running Oracle HTTP Server, OC4J and other Oracle Application Server components on multiple systems which have different hardware configurations, use round robin with the weighted option or random with the weighted option to help distribute requests to each system.
You need to determine the routing weight for each system based on what other components are running on the systems and based on how many requests each system can adequately handle.