Oracle® Application Server Concepts
10g Release 2 (10.1.2) B13994-02
This chapter provides an overview of Oracle Application Server performance and caching features and benefits.
Increasing the performance of your Web site and the speed of your applications without redesigning or rebuilding the site are common goals. To maximize Oracle Application Server performance, all components need to be monitored, analyzed, and tuned. Performance must be built into an application deployment; you must anticipate performance requirements during application analysis and design, and balance the costs and benefits of optimal performance.
The overall performance of an application is determined by these factors:
How many resources are available?
How many clients need the resource?
How long must they wait for the resource?
How long do they hold the resource?
The following concepts are fundamental to understanding performance:
Response time: The response time is the service time plus the wait time for a task to complete. You can improve response time by reducing the service time, the wait time, or both. For example, you can decrease wait time by implementing parallel processing with multiple resources, so that more resources are available to incoming tasks. Oracle HTTP Server processes requests in this way, allocating client requests to available httpd processes.
System throughput: System throughput is the amount of work accomplished in a given amount of time. You can increase throughput by reducing service time and by adding more of the scarce resources, which reduces overall response time. For example, if the system is CPU bound, then adding CPU resources should improve performance.
Wait time: While the service time for a task may stay the same, the wait time lengthens as contention increases. If ten users each request a service that takes one second, the tenth user must wait nine seconds while the first nine are served. Reducing contention should improve performance.
Critical resources: Resources such as CPU, memory, I/O capacity, and network bandwidth are key to reducing service time. Adding resources increases throughput and reduces response time.
As the number of requests rises, the time to service completion increases if the number of resources stays the same. To improve performance, you can either limit the demand rate to maintain acceptable response times, or you can add resources.
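As a worked illustration of these relationships (generic figures, not Oracle measurements): with a single resource whose service time is S = 1 second, the nth request to arrive must wait for the n - 1 requests ahead of it, so

response time = service time + wait time = S + (n - 1) × S = n × S

The tenth user therefore waits 9 seconds and sees a 10-second response time. Adding a second, parallel resource roughly halves each queue, which is why increasing a critical resource reduces both wait time and response time.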
Achieving optimal effectiveness in your system requires planning, monitoring, and periodic adjustment. The first step in performance tuning is to determine the goals you need to achieve, and then design effective usage of available technology into your applications. After implementing your system, you must periodically monitor and adjust your system.
Whether you are designing or maintaining a system, you should set specific performance goals so that you know how and what to optimize. If you alter parameters without a specific goal in mind, you can waste time tuning your system without significant gain.
An example of a specific performance goal is an order entry response time under three seconds. If the application does not meet that goal, identify the cause and take corrective action. During development, test the application to determine if it meets the desired performance goals.
Application developers, database administrators, and system administrators must be careful to set appropriate performance expectations for users. When the system carries out particularly complicated operations, response time may be slower than when it is performing a simple operation. Users should be made aware of which operations might take longer.
With clearly defined performance goals, you can readily determine when performance tuning has been successful. Success depends on the functional objectives you have established with the user community, your ability to measure whether or not criteria are being met, and your ability to take corrective action to overcome any performance issues.
Ongoing performance monitoring enables you to maintain a well-tuned system. Keeping a history of the application's performance over time enables you to make useful comparisons. With data about actual resource consumption for a range of loads, you can conduct objective scalability studies and from these predict the resource requirements for anticipated load volumes.
To improve the performance of your applications, consider the factors that influence performance and adjust your system as necessary.
Performance spans several areas:
Sizing and configuration: Determining the type of hardware needed to support your performance goals
Parameter tuning: Setting configurable parameters to achieve the best performance for your application
Performance monitoring: Determining what hardware resources are being used by your application and what response time your users are experiencing
Troubleshooting: Diagnosing why an application is using excessive hardware resources, or why the response time exceeds the desired limit
Excessive demand increases response time and reduces throughput. If the demand rate exceeds the achievable throughput, then you must determine through monitoring which resource is exhausted, and if possible increase that resource.
Performance problems can be relieved by making adjustments to the following:
Unit consumption: Reducing the resource consumption of each request can improve performance, for example through pooling and caching (see the sketch after this list).
Functional demand: Rescheduling or redistributing the work can improve performance in some cases.
Capacity: Increasing or reallocating resources can improve performance in some cases.
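As an illustration of reducing unit consumption through pooling (a minimal Java sketch; the JNDI name jdbc/OrdersDS and the orders table are hypothetical), the class below borrows database connections from a container-managed pool instead of opening a new connection for every request:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class OrderLookup {
    // Look up the pooled DataSource once; the container manages the physical connections.
    private final DataSource pool;

    public OrderLookup() throws NamingException {
        // "jdbc/OrdersDS" is a hypothetical JNDI name configured in the application server.
        pool = (DataSource) new InitialContext().lookup("jdbc/OrdersDS");
    }

    public int countOrders() throws SQLException {
        // Borrow a connection from the pool instead of creating one per request.
        try (Connection conn = pool.getConnection();
             java.sql.Statement stmt = conn.createStatement();
             java.sql.ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

Because closing the connection simply returns it to the pool, each request avoids the cost of establishing and tearing down a physical database connection.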
Tuning usually involves a series of trade-offs. After you have determined the bottlenecks, you may have to modify performance in some other areas to achieve the desired results. For example, if I/O is a problem, you may need to purchase more memory or more disks. If a purchase is not possible, you may have to limit the concurrency of the system to achieve the desired performance. However, if you have clearly defined goals for performance, the decision on what to trade for higher performance is easier because you have identified the most important areas.
Caching is one of the key technologies that promises to alleviate the computational and economic burdens faced by today's overstrained e-business infrastructures. Nearly all applications benefit from having Web content cached on hosts between the consumers searching for content and the content source itself. When applied to Web applications, caching is essentially a technique for storing partial or complete Web pages, both static and dynamic, in memory closer to the browser to address the problem of slow access to Web sites.
A practical caching solution must do the following tasks:
Serve dynamic content, ensuring freshness
Handle thousands of concurrent users at high sustained rates of throughput
Provide fast response times
Support local and global deployments
Integrate with other caching techniques
Post-process cached content
Provide high gains with low-cost infrastructure
Caching solutions can be employed in different tiers. Each solution targets a specific tier and provides certain capabilities. It is important to note that response time is the sum of the times spent accessing the different tiers in your architecture, so a complete solution most often combines caching at more than one tier. Caching solutions include browser caching, proxy caching, content delivery network services, and server accelerators.
The following sections describe Oracle Application Server Web Cache, the middle-tier server acceleration and load balancing component of Oracle Application Server.
A server accelerator is a cache and compression engine that stands in for one or more specific Web servers, rather than working on behalf of a group of browser users. A server accelerator cache, or "reverse proxy" cache, intercepts all requests to the Web servers, caches a copy of the objects served, and then serves those objects when it next receives requests for them. As the server accelerator's cache becomes populated, it is able to serve more of the requested content itself, freeing up processing resources in the application server and the database for other tasks. Server accelerators also help cut costs, as they are implemented on inexpensive platforms and take some of the load off of more expensive back-end content generation systems.
Oracle Application Server Web Cache is a powerful, state-of-the-art server acceleration and load balancing solution. OracleAS Web Cache offers intelligent caching, page assembly, and compression features that distinguish it from all other Web caching solutions on the market. Unlike legacy proxy servers, which cache only static objects, OracleAS Web Cache accelerates the delivery of both static and dynamic Web content, improving response time for feature-rich pages.
OracleAS Web Cache also supports Edge Side Includes (ESI) for performing page assembly at the network edge. OracleAS Web Cache leverages this technology to enable partial-page caching and dynamic page assembly using both cacheable and non-cacheable page fragments. In this way, OracleAS Web Cache optimizes the delivery of rich, personalized content.
Deployed before a farm of application servers or globally at the network edge, OracleAS Web Cache provides load balancing, failover, clustering, and surge protection features for application servers.
In the simplest of deployment scenarios, OracleAS Web Cache is positioned in front of one or more Web servers to cache and compress content generated by those servers. OracleAS Web Cache then delivers that content to Web browsers. When Web browsers access the Web site, they send HTTP or HTTPS requests to OracleAS Web Cache, which acts as a virtual server for the Web site, masking the existence of the application server farm and the database. If the requested content has changed, OracleAS Web Cache retrieves the new content from the application servers according to the relative load on each server.
OracleAS Web Cache can be deployed on the same host as the origin application server (co-located) or on a separate node of its own (dedicated). Figure 9-1 shows a dedicated OracleAS Web Cache deployment.
Because OracleAS Web Cache consumes memory, co-location is only viable if the cache and the application servers do not contend for resources.
Dedicated deployment is often preferable to co-located deployment. In a dedicated scenario, there is no risk of resource contention with other server processes. OracleAS Web Cache also performs well on commodity hardware, so a dedicated deployment need not be a costly one in terms of hardware expenditure. For very high-volume Web sites, and to avoid a single point of failure, two or more hosts running OracleAS Web Cache may be deployed behind a third-party network load balancing device.
Cache hierarchies: OracleAS Web Cache offers hierarchical caching features that enable customers to easily create Content Delivery Networks (CDNs). Many Web-based applications mirror their Web sites in strategic geographical locations. Caching provides a low-cost alternative to mirroring, and can also be used to serve local markets to shorten response times to these markets and to reduce bandwidth and rack space costs for the content provider. Additionally, in a distributed cache hierarchy, the central cache is aware of the local caches. As a result, any content invalidation messages sent to the central cache automatically propagate to these remote caches. This invalidation propagation ensures content consistency across the CDN and simplifies the management of cache hierarchies.
Using OracleAS Web Cache in heterogeneous environments: While integrated with Oracle Application Server, OracleAS Web Cache is also compatible with third-party application servers, databases, and content management systems.
Oracle Application Server Web Cache is a powerful solution for accelerating Web-based applications. Its key features fall into three categories: features that improve performance and scalability, features that enhance availability and quality of service, and features for end-user performance monitoring.
Oracle Application Server Web Cache uses compression, caching, page assembly, and invalidation technologies to speed the delivery of dynamically-generated content and make more efficient use of low-cost hardware.
You can configure OracleAS Web Cache to compress both cacheable and non-cacheable documents. Because compressed documents are smaller, they are delivered to browsers faster and with fewer round-trips, reducing overall latency. OracleAS Web Cache is able to compress text files by up to a factor of 10.
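The effect of compression is easy to demonstrate with standard gzip, the encoding browsers advertise through the Accept-Encoding header (a generic Java sketch using java.util.zip; it is not the OracleAS Web Cache implementation, and the sample page is made up):

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class CompressionDemo {
    public static void main(String[] args) throws Exception {
        // Build a repetitive HTML-like payload, similar in character to a generated page.
        StringBuilder page = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            page.append("<tr><td>row ").append(i).append("</td><td>value</td></tr>\n");
        }
        byte[] original = page.toString().getBytes(StandardCharsets.UTF_8);

        // Compress with gzip, the same algorithm browsers accept via "Accept-Encoding: gzip".
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(original);
        }
        byte[] compressed = buffer.toByteArray();

        System.out.println("original:   " + original.length + " bytes");
        System.out.println("compressed: " + compressed.length + " bytes");
    }
}

Markup-heavy text such as this typically compresses by a large factor, which is why compressed pages reach the browser in fewer, smaller transfers.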
OracleAS Web Cache uses cacheability rules to store documents. There are rules for storing static content, and also rules for storing dynamically generated content created using technologies such as JavaServer Pages (JSP). Supporting dynamic content caching allows OracleAS Web Cache to recognize multiple versions of documents with the same URL, cache session-aware pages, and cache pages that contain personalized information. There are also rules for pages that require personalized content assembly of dynamic Edge Side Includes (ESI) fragments.
OracleAS Web Cache provides dynamic assembly of pages with both cacheable and non-cacheable page fragments. It does this by enabling pages to be broken down into fragments with differing cacheability profiles. With partial-page caching, more HTML content can be cached, then assembled and delivered by OracleAS Web Cache when requested.
The basic structure used to create dynamic content is a template page containing HTML fragments. The template consists of common elements, such as the "look and feel" elements of the page. The HTML fragments represent dynamic subsections of the page. The template page is associated with the URL that end users request. The template page uses the Edge Side Includes (ESI) markup language to tell OracleAS Web Cache to fetch and include the HTML fragments. Each individual fragment is a separate object with its own caching policy. ESI can be used with HTML, XML, and any Web publishing technology. ESI is an open standard. For more information, see http://www.esi.org.
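A minimal sketch of such a template follows (the servlet, fragment URLs, and page content are hypothetical; <esi:include> is the standard ESI inclusion tag). The servlet generates the cacheable template, and OracleAS Web Cache fetches and assembles each referenced fragment according to that fragment's own caching policy:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PortalTemplateServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        // Common "look and feel" elements belong to the cacheable template.
        out.println("<html><body>");
        out.println("<h1>My Portal</h1>");
        // Each <esi:include> marks a fragment that the cache fetches and assembles
        // separately, so personalized fragments stay fresh while the template is cached.
        out.println("<esi:include src=\"/fragments/headlines\"/>");
        out.println("<esi:include src=\"/fragments/portfolio\"/>");
        out.println("</body></html>");
    }
}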
For JSP applications, OC4J supports JESI. JESI is a specification and custom JSP tag library that developers can use to automatically generate ESI code using JSP syntax. Although JSP developers can always use ESI directly, JESI provides an even easier way to express the modularity of pages and the cacheability of those modules, without requiring developers to learn a new syntax. In addition, Oracle JDeveloper provides the ESI Servlet filter extension, which enables developers to create JSPs with ESI or JESI tags and test them within the development environment.
Oracle Application Server Web Cache includes workload management features such as surge protection, load balancing, failover, session binding, cache consistency options, and clustering to enhance application availability and ensure quality of service.
OracleAS Web Cache passes requests for non-cacheable, stale, or missing objects to application servers. To prevent an overload of requests on the application servers, OracleAS Web Cache has a surge protection feature that enables you to set a limit on the number of concurrent requests that the application servers can handle. When the limit is reached, subsequent requests are queued. If the queue is full, then OracleAS Web Cache rejects the request and serves a site busy error page to the Web browser that initiated the request.
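Conceptually, surge protection combines a concurrency limit with a bounded queue. The following Java sketch illustrates the idea only; it is not the OracleAS Web Cache implementation, and the limits are arbitrary:

import java.util.concurrent.Semaphore;

public class SurgeProtector {
    // At most 50 requests are forwarded to the application servers at once.
    private final Semaphore originSlots = new Semaphore(50);
    // At most 200 requests are admitted in total (in flight plus waiting); beyond that, reject.
    private final Semaphore admitted = new Semaphore(200);

    /** Returns the response, or a "site busy" page when capacity and queue are exhausted. */
    public String handle(String request) throws InterruptedException {
        if (!admitted.tryAcquire()) {
            return "503 Site busy";                 // queue full: reject immediately
        }
        try {
            originSlots.acquire();                  // wait in the queue for an origin slot
            try {
                return forwardToOrigin(request);    // origin server does the real work
            } finally {
                originSlots.release();
            }
        } finally {
            admitted.release();
        }
    }

    private String forwardToOrigin(String request) {
        return "200 OK";                            // placeholder for the proxied response
    }
}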
Load balancing and failover allow Web sites to be built with a collection of servers for better scalability and reliability. OracleAS Web Cache uses its load balancing feature to send each request to the application server with the most available capacity. When an application server becomes unavailable, OracleAS Web Cache automatically performs backend failover: it distributes the load over the remaining application servers and polls the failed application server for its status until it is back online. When the failed server returns to operation, OracleAS Web Cache includes it in the load distribution.
Additionally, you can configure OracleAS Web Cache solely for the purpose of providing load balancing. With this option, you replace the hardware load balancer with one or more caches that do not cache content.
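A simplified sketch of backend load balancing with failover follows (a conceptual illustration with made-up structure, not the actual selection algorithm used by OracleAS Web Cache): requests go to the healthy origin server with the fewest outstanding requests, and servers marked down by the status poller are skipped until they recover:

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class OriginServerBalancer {
    static class Origin {
        final String host;
        volatile boolean up = true;                       // toggled by a background health poller
        final AtomicInteger active = new AtomicInteger(); // requests currently outstanding
        Origin(String host) { this.host = host; }
    }

    private final List<Origin> origins;

    public OriginServerBalancer(List<Origin> origins) { this.origins = origins; }

    /** Choose the available origin with the lightest current load, or null if all are down. */
    public Origin choose() {
        Origin best = null;
        for (Origin o : origins) {
            if (o.up && (best == null || o.active.get() < best.active.get())) {
                best = o;
            }
        }
        return best;
    }
}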
OracleAS Web Cache enables you to bind user sessions to a given application server in order to maintain state for a period of time. An application binds user sessions by including session data in the HTTP header or body it sends to Web browsers in such a way that the browser is forced to include it with its next request. This data is transferred between the application server and the browser through OracleAS Web Cache.
OracleAS Web Cache provides several features for ensuring consistency between the cache and application servers, including:
Invalidation and Expiration: OracleAS Web Cache supports invalidation as a way to ensure that its cache stays valid with respect to the content being served. Administrators and developers can invalidate cache content by either sending an invalidation message to the computer running OracleAS Web Cache or by assigning an expiration time limit to the cached documents. Expiration is useful for documents where the frequency of content changes is predictable and regular, such as standard images or templates.
HTTP Cache Validation: OracleAS Web Cache uses HTTP/1.1 validation models to determine how best to serve a response to browsers. Validation works by comparing two validators, one in the request header and the other in the cached object's response header, to determine whether they represent the same or different entities (see the conditional request sketch after this list).
Performance Assurance Heuristics: To handle performance issues while maintaining cache consistency, OracleAS Web Cache uses built-in performance assurance heuristics that enable it to assign a queue order to documents. These heuristics determine which documents can be served stale and which documents must be refreshed immediately.
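The validation model itself is standard HTTP/1.1. The sketch below (plain Java using java.net.HttpURLConnection; the URL and ETag value are made up) issues a conditional request: if the cached copy's validator still matches the origin server's, the server answers 304 Not Modified and the cached body can be served as-is:

import java.net.HttpURLConnection;
import java.net.URL;

public class ConditionalGet {
    public static void main(String[] args) throws Exception {
        // Validator remembered from the previously cached response (hypothetical value).
        String cachedETag = "\"v42\"";

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://www.example.com/catalog/index.html").openConnection();
        // Ask the origin server to compare its current validator with the cached one.
        conn.setRequestProperty("If-None-Match", cachedETag);

        int status = conn.getResponseCode();
        if (status == HttpURLConnection.HTTP_NOT_MODIFIED) {
            System.out.println("304: cached copy is still valid, serve it from the cache");
        } else {
            System.out.println(status + ": fetch and re-cache the new response body");
        }
        conn.disconnect();
    }
}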
An OracleAS Cluster (Web Cache) is a loosely related set of Web cache instances working together to provide a single logical cache. You can configure multiple instances of OracleAS Web Cache to run as members of an OracleAS Cluster (Web Cache). This increases the availability and scalability of your caching solution.
Oracle Application Server Web Cache also includes performance monitoring functionality that provides valuable insight into end-user service levels.
Oracle Application Server Web Cache includes instrumentation for end-user performance monitoring. Administrators can configure OracleAS Web Cache to measure end-user response times for individual URLs, sets of URLs, or entire Web-based applications, regardless of whether the URLs are cached. For each instrumented request, the complete user experience is recorded. The raw measurements are collected in the OracleAS Web Cache access logs.
The following sections describe other important features of Oracle Application Server Web Cache.
As part of Oracle Application Server, OracleAS Web Cache can be configured to use SSL and to work with SSL acceleration cards or third-party SSL acceleration appliances. OracleAS Web Cache supports applications that require client-side SSL certificates for PKI-based authentication. For HTTPS requests that require client-side certificates, the client browser sends its certificate to OracleAS Web Cache during the SSL handshake. The cache forwards the request to Oracle HTTP Server along with the client's certificate information inserted in special HTTP request headers. Oracle HTTP Server recognizes the headers and is able to pass user credentials to Oracle Application Server Single Sign-On for authentication. OracleAS Web Cache can also perform SSL termination, and provide caching for applications that use mod_osso.
Administrators now have better control over the granularity of multi-version caching rules based on user-agent request headers, or browser types. Previously, you could either cache one version of a page for all browsers, or cache one version for each browser type and version. Now, you can customize the caching rules to define groups of browsers that share a cached version of a page. For example, you could cache one page for all versions of Internet Explorer, one for all versions of Netscape, and one for all other browsers.
In addition to managing Oracle HTTP Server and OC4J processes, OPMN now manages the cache and administration server processes for OracleAS Web Cache. These include the start, stop, and auto-restart operations. However, standalone OracleAS Web Cache deployments will continue to use the OracleAS Web Cache Control and "watchdog" process management utilities.
OracleAS Web Cache provides an inline invalidation mechanism as an additional way to manage content freshness. The inline invalidation model is implemented as part of the OracleAS Web Cache ESI support, and provides a useful way for origin servers to include invalidation messages along with transactional responses sent to Web Cache. The ability to send invalidation messages inline reduces the connection overhead associated with sending invalidations separately.
Another new invalidation feature for this release is support for search key invalidation. Previously, a cached document was identified by a URL-based cache key. Invalidation requests needed to specify either exact URLs or a set of URLs and headers matching a regular expression in order to invalidate cached objects. In this release, OracleAS Web Cache invalidation has been extended to support search keys. Cached objects can now be associated with multiple application-specified search keys, with the URL-based key being the primary key. Invalidation can be based on the search keys instead of the primary URL-based key, making invalidation easier for administrators and application developers to use.
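The idea can be sketched as a cache whose entries carry application-defined search keys in addition to their primary URL key (a generic Java illustration, not the OracleAS Web Cache invalidation API):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SearchKeyCache {
    private final Map<String, String> byUrl = new HashMap<>();            // primary key: URL -> content
    private final Map<String, Set<String>> bySearchKey = new HashMap<>(); // search key -> URLs

    /** Cache content under its URL and associate it with application-defined search keys. */
    public void put(String url, String content, String... searchKeys) {
        byUrl.put(url, content);
        for (String key : searchKeys) {
            bySearchKey.computeIfAbsent(key, k -> new HashSet<>()).add(url);
        }
    }

    /** Invalidate every cached object tagged with the given search key, whatever its URL. */
    public void invalidateBySearchKey(String key) {
        Set<String> urls = bySearchKey.remove(key);
        if (urls != null) {
            for (String url : urls) {
                byUrl.remove(url);
            }
        }
    }
}

Invalidating by a search key, such as a product identifier, then removes every cached page associated with that key without the caller having to enumerate exact URLs.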
Web application developers may also encounter situations in which application objects are not HTML or XML fragments; they may have to deal with XML DOM objects or Java serializable objects. There may also be requirements to reuse or post-process cached content, or to maintain intermediate results. Oracle Application Server provides two components for application-level caching: Java Object Cache and Web Object Cache.
These two cache offerings can be used independently as well as together to provide enhanced caching capabilities.
Java Object Cache is a set of Java classes designed to manage Java objects within a process, across processes, and on local disks. Java Object Cache provides a powerful and flexible service that improves server performance by managing local copies of objects that are expensive to retrieve or create. There are no restrictions on the type of object that can be cached or the original source of the object. The management of each object in the cache is easily customized. Each object has a set of attributes associated with it to control such things as how the object is loaded into the cache, where the object is stored, how it is invalidated, and who should be notified when the object is invalidated. Objects can be invalidated as a group or individually.
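The usage pattern such a cache supports can be sketched with plain JDK classes (a conceptual illustration only, not the Java Object Cache API): an expensive object is created once by a loader, served from memory on subsequent requests, and removed when invalidated, individually or as a group:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class SimpleObjectCache<K, V> {
    private final ConcurrentMap<K, V> objects = new ConcurrentHashMap<>();
    private final Function<K, V> loader;   // how to create the object when it is not cached

    public SimpleObjectCache(Function<K, V> loader) { this.loader = loader; }

    /** Return the cached object, loading (and caching) it on first use. */
    public V get(K key) {
        return objects.computeIfAbsent(key, loader);
    }

    /** Invalidate a single object so the next get() rebuilds it. */
    public void invalidate(K key) { objects.remove(key); }

    /** Invalidate the whole group of cached objects. */
    public void invalidateAll() { objects.clear(); }
}

An application would construct the cache with a loader that performs the expensive retrieval, for example a database query or a remote call, and then call get() wherever the object is needed.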
Web Object Cache is a Web-application-level caching facility that is embedded and maintained within a Java Web application. The Web Object Cache is a hybrid cache, both Web-based and object-based. Using the Web Object Cache, applications can cache programmatically using API calls (for servlets) or custom tag libraries (for JSPs). The Web Object Cache is generally used as a complement to the Web cache. By default, the Web Object Cache uses the Java Object Cache as its repository.