Oracle® Application Server High Availability Guide
10g Release 2 (10.1.2) B14003-03
This chapter describes how to perform configuration changes and on-going maintenance for the Oracle Application Server middle-tier.
This chapter covers the following topics:
Section 4.1, "Middle-tier High Availability Configuration Overview"
Section 4.3, "Availability Considerations for the DCM Configuration Repository"
Section 4.4, "Using Oracle Application Server Clusters (OC4J)"
Section 4.5, "Managing OracleAS Cold Failover Cluster (Middle-Tier)"
Section 4.6, "Managing Oracle Application Server Middle-tier Upgrades"
Section 4.7, "Using OracleAS Single Sign-On with OracleAS Cluster (Middle-Tier)"
Oracle Application Server provides different configuration options to support high availability for the Oracle Application Server middle-tier.
This section covers the following topics:
When administering a DCM-Managed OracleAS Cluster, you use either Application Server Control Console or dcmctl commands to manage and configure common configuration information on one Oracle Application Server instance. DCM then propagates and replicates the common configuration information across all Oracle Application Server instances within the DCM-Managed OracleAS Cluster. The common configuration information for the cluster is called the cluster-wide configuration.
Note: Some configuration information can be configured individually, per Oracle Application Server instance within a cluster (these configuration options are also called instance-specific parameters).
Each Oracle Application Server instance in a DCM-Managed OracleAS Cluster has the same base configuration. The base configuration contains the cluster-wide configuration and excludes instance-specific parameters.
This section describes how to create and use a DCM-Managed OracleAS Cluster. This section covers the following topics:
Section 4.2.2, "Adding Instances to DCM-Managed OracleAS Clusters"
Section 4.2.3, "Removing Instances from DCM-Managed OracleAS Clusters"
Section 4.2.4, "Starting, Stopping, and Deleting DCM-Managed OracleAS Clusters"
Section 4.2.5, "Rolling Upgrades for Stateful J2EE Applications"
Section 4.2.6, "Configuring Oracle HTTP Server Options for DCM-Managed OracleAS Clusters"
Section 4.2.7, "Understanding DCM-Managed OracleAS Cluster Membership"
See Also: Distributed Configuration Management Administrator's Guide for information on dcmctl commands
An OracleAS Farm contains a collection of Oracle Application Server instances. In an OracleAS Farm, you can view a list of all application server instances when you start Application Server Control Console. The application server instances shown in the Standalone Instances area on the Application Server Control Console Farm Home Page are available to be added to DCM-Managed OracleAS Clusters.
Each Oracle Application Server Farm uses either a File-Based Repository or a Database-Based Repository. The steps for associating an application server instance with an OracleAS Farm depend on the type of repository.
This section covers the following:
Section 4.2.1.1, "Associating an Instance with an OracleAS Database-based Farm"
Section 4.2.1.2, "Associating an Instance with an OracleAS File-based Farm"
Section 4.2.1.3, "Using the Application Server Control Console Create Cluster Page"
Note: This section covers procedures for clusterable middle-tier instances that are part of an OracleAS Farm. For purposes of this section, a clusterable instance is a middle-tier instance where the dcmctl isClusterable command returns the value true.
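For example, you can check from the command line whether the local instance is clusterable before attempting to add it to a cluster (a minimal sketch; the exact output format may vary by release):
> cd ORACLE_HOME/dcm/bin
> dcmctl isClusterable
true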
If you have not already done so during the Oracle Application Server installation process, you can associate an application server instance with an OracleAS Database-based Farm:
For an OracleAS Database-based Farm, do the following to add an Oracle Application Server instance to the OracleAS Farm:
Navigate to the Application Server Control Console Instance Home Page.
In the Home area, select the Infrastructure link and follow the instructions for associating an application server instance with an Oracle Application Server Infrastructure.
This section covers the following topics:
Section 4.2.1.2.1, "Creating an OracleAS File-based Farm Repository Host"
Section 4.2.1.2.2, "Adding Instances to an OracleAS File-based Farm"
You can instruct the Oracle Application Server installer to create an OracleAS File-based Farm when you install Oracle Application Server. If you did not create an OracleAS File-based Farm during installation, then you can create the OracleAS File-based Farm with the following steps.
Using the Application Server Control Console for the instance that you want to use as the repository host, select the Infrastructure link to navigate to the Infrastructure page. If a repository is not configured, then the Farm Repository field shows "Not Configured", as shown in Figure 4-1.
Figure 4-1 Application Server Control Console Farm Repository Management
On the Infrastructure page, in the OracleAS Farm Repository Management area, select the Configure button to start the Configure OracleAS Farm Repository wizard. The hostname appears under Configure Oracle Farm Repository Source.
Select the New file-based repository option and click Next, as shown in Figure 4-2.
Figure 4-2 Application Server Control Console: Create Repository Wizard Step 1
The wizard jumps to Step 4 of 4, Validation, as shown in Figure 4-3.
Figure 4-3 Application Server Control Console Create Repository Wizard Step 4
Select Finish, and Oracle Application Server creates the OracleAS File-based Farm.
When the wizard completes, note the Repository ID shown in the OracleAS Farm Repository Management area on the Infrastructure page. You need to use the Repository ID to add instances to the OracleAS File-based Farm.
When you go to the Application Server Control Console Home page, notice that the home page shows that the OC4J instance and the Oracle HTTP Server are stopped, and the page now includes a Farm link in the General area.
To add standalone application server instances to an OracleAS File-based Farm, perform the following steps:
Obtain the Repository ID for the OracleAS File-based Farm that you want to join. To find the Repository ID, on any Oracle Application Server instance that uses the OracleAS File-based Farm, click the Infrastructure link, and check the value of the File-based Repository ID field in the OracleAS Farm Repository Management area.
Switch to the Application Server Control Console for the standalone instance that you want to add to the OracleAS File-based Farm and click the Infrastructure link. If a repository is not configured, then the Farm Repository field shows "Not Configured", as shown in Figure 4-1.
Click Configure to start the Configure OracleAS Farm Repository wizard. The repository creation wizard appears (Figure 4-2). The host name appears in the OracleAS Instance field under the Configure Oracle Farm Repository Source area.
Select the Existing file-based repository button and click Next. The repository creation wizard displays the Location page, Step 3 of 4 (Figure 4-4).
Figure 4-4 Application Server Control Console Add Instance to Farm
Enter the repository ID for the Repository Host and click Next.
In the step 4 of 4 page, Configure OracleAS Farm Repository Validation, click Finish. When the wizard completes, the standalone instance joins the OracleAS File-based Farm.
After the wizard completes, you return to the Application Server Control Console Infrastructure page.
Using the Application Server Control Console Farm Home Page, you can create a new DCM-Managed OracleAS Cluster.
From the Farm Home page, create a new DCM-Managed OracleAS Cluster as follows:
Click the Farm link to navigate to the Farm Home Page.
Note: Application Server Control Console shows the Farm Home Page when an Oracle Application Server instance is part of a farm.
Click Create Cluster to display the Create Cluster page (Figure 4-5).
Enter a name for the new cluster and click Create. Cluster names within a farm must be unique.
A confirmation page appears.
Click OK to return to the Farm Home Page.
The Farm Home page shows the cluster in the Clusters area. The new cluster is empty and does not include any Oracle Application Server instances. Use the Join Cluster button on the Farm Home page to add instances to the cluster.
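As an alternative to the console, you can create an empty cluster from the DCM command line. The following is a hedged sketch: the cluster name mycluster is a placeholder, and the exact option names should be confirmed in the Distributed Configuration Management Administrator's Guide.
> cd ORACLE_HOME/dcm/bin
> dcmctl createCluster -cl mycluster
> dcmctl listClusters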
To add Oracle Application Server instances to a DCM-Managed OracleAS Cluster, do the following:
Navigate to the Farm Home Page. To navigate to the Farm Home page from an Oracle Application Server instance Home page, select the link next to the Farm field in the General area on the Home page.
Note: If the Farm field is not shown, then the instance is not part of a Farm and you will need to associate the standalone instance with a Farm.
Select the radio button for the Oracle Application Server instance that you want to add to a cluster from the Standalone Instances section.
Click Join Cluster.
Figure 4-6 shows the Join Cluster page.
Select the radio button of the cluster that you want the Oracle Application Server instance to join.
Click Join. Application Server Control Console adds the instance to the selected cluster and then displays a confirmation page.
Click OK to return to the Farm Home Page.
Repeat these steps for each additional standalone instance you want to add to the cluster.
Note the following when adding Oracle Application Server instances to a DCM-Managed OracleAS Cluster:
The order in which you add Oracle Application Server instances to a DCM-Managed OracleAS Cluster is significant. The first instance that joins the DCM-Managed OracleAS Cluster serves as the base configuration for all additional instances that join the cluster. The base configuration includes all cluster-wide configuration information. It does not include instance-specific parameters.
After the first Oracle Application Server instance joins the DCM-Managed OracleAS Cluster, the base configuration overwrites existing cluster-wide configuration information for subsequent Oracle Application Server instances that join the cluster. Each additional Oracle Application Server instance, after the first, that joins the cluster inherits the base configuration specified for the first Oracle Application Server instance that joined the cluster.
Before adding an Oracle Application Server instance to a DCM-Managed OracleAS Cluster, Application Server Control Console stops the instance. You can restart the Oracle Application Server instance by selecting the cluster link, selecting the appropriate instance from within the cluster, and then clicking the Start button.
Application Server Control Console removes an Oracle Application Server instance from the Standalone Instances area when the instance joins a DCM-Managed OracleAS Cluster.
To add multiple standalone Oracle Application Server instances to a DCM-Managed OracleAS Cluster in a single operation, use the dcmctl joinCluster command.
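For example, to add the instances inst1 and inst2 to cluster1 (a sketch with placeholder names, following the option style used in the scripts later in this chapter; whether several instances can be passed in one invocation depends on the release, so two invocations are shown):
> dcmctl joinCluster -cl cluster1 -i inst1
> dcmctl joinCluster -cl cluster1 -i inst2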
When an Oracle Application Server instance contains certain Oracle Application Server components, it is not clusterable. Use the dcmctl isClusterable command to test whether an instance is clusterable. If the instance is not clusterable, then Application Server Control Console returns an error when you attempt to add the instance to a DCM-Managed OracleAS Cluster.
All Oracle Application Server instances that are to be members of a DCM-Managed OracleAS Cluster must be installed on the same flavor of operating system. For example, instances on different variants of UNIX can be clustered together, but they cannot be clustered with instances on Windows systems.
Note: For adding instances to an OracleAS File-based Farm, where the instances will be added to a DCM-Managed OracleAS Cluster, there is no known fixed upper limit on the number of instances; a DCM-Managed OracleAS Cluster of 12 instances has been tested successfully.
To remove an Oracle Application Server instance from a cluster, do the following:
Select the cluster in which you are interested on the Farm Home Page. This displays the cluster page.
Select the radio button of the Oracle Application Server instance to remove from the cluster and click Remove.
Repeat these steps for each Oracle Application Server instance that you want to remove.
Note the following when removing Oracle Application Server instances from a DCM-Managed OracleAS Cluster:
Before Application Server Control Console removes an Oracle Application Server instance from a cluster, it stops the instance. After the operation completes, you can restart the Oracle Application Server instance from the Standalone Instances area of the Farm Home Page.
The dcmctl leaveCluster command removes one Oracle Application Server instance from the cluster at each invocation.
When the last Oracle Application Server instance leaves a cluster, cluster-wide configuration information associated with the cluster is removed. The cluster is now empty and the base configuration is not set. Subsequently, Oracle Application Server uses the first Oracle Application Server instance that joins the cluster as the base configuration for all additional Oracle Application Server instances that join the cluster.
You can remove any Oracle Application Server instance from a cluster at any time. The first instance to join a cluster does not have special properties. The base configuration is created from the first instance to join the cluster, but this instance can be removed from the cluster in the same manner as the other instances.
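For example, to remove the instance inst1 from its cluster from the command line (the instance name is a placeholder):
> dcmctl leaveCluster -i inst1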
Figure 4-7 shows the Application Server Control Console Farm Home Page, including two clusters, cluster1 and cluster2.
Figure 4-7 Oracle Application Server 10g Farm Page
Table 4-1 lists the cluster control options available on the Farm Home Page.
Table 4-1 Oracle Application Server Farm Page Options
| If you want to... | Then... |
|---|---|
| Start all Oracle Application Server instances in a DCM-Managed OracleAS Cluster | Select the radio button next to the cluster and click Start. |
| Restart all Oracle Application Server instances in a DCM-Managed OracleAS Cluster | Select the radio button next to the cluster and click Restart. |
| Stop all Oracle Application Server instances in a DCM-Managed OracleAS Cluster | Select the radio button next to the cluster and click Stop. |
| Delete a DCM-Managed OracleAS Cluster (Oracle Application Server instances that are in the cluster are removed from the cluster and become standalone instances in the Farm) | Select the radio button next to the cluster and click Delete. |
HttpSession replication has become a popular feature among Oracle Application Server users. OracleAS Cluster (OC4J) takes care of replicating the HttpSession information to the instances that participate in the cluster and enables the failover of requests between nodes in a manner transparent to the application user. However, this high availability feature is affected by the normal lifecycle of applications. Every time you redeploy an application, the deployment process triggers the reload of the application classes in the instances in the cluster and causes the HttpSession to be lost. Maintaining session information across deployments may generate some inconsistencies, depending on the logic included in the latest version of the application; it is up to the application code whether the session information is treated differently than in the previous deployment. In many cases, however, changes in the application code do not affect the way the session information is processed inside the business logic. Additionally, because of the possible criticality of the information, it may be necessary to maintain the data added to the session by users across deployments.
This section describes how to use OracleAS Clusters so that the deployments of new versions of an application do not imply the loss of HttpSession information. The procedure is based on the consecutive ("rolling") update of the application in the different instances that form the cluster.
One of the benefits of using OracleAS Clusters managed through file-based or database-based repositories (also known as DCM-managed clusters) is that the creation of a cluster through DCM automatically triggers the propagation of configuration across all the participants in the cluster. This enables the deployment of applications to all the Oracle Application Server instances in a cluster in a single step. DCM propagates the EAR or WAR file and replicates the configuration to all the nodes that are part of that cluster.
This configuration replication feature is commonly misunderstood and associated with the replication of "runtime state" in a cluster. Configuration replication is achieved with DCM. Replication of HttpSession state in a J2EE deployment is achieved through IP multicast and serialization mechanisms in OracleAS Cluster (OC4J).
The creation of OracleAS Cluster through Application Server Control triggers both mechanisms: configuration replication with DCM and HttpSession replication between the OC4J containers that "reside" in the Oracle Application Server instances that are part of the cluster. The session replication mechanism, however, is totally independent of the configuration replication. Only three configuration parameters are needed to enable the replication of the HttpSession object across OC4J instances:
the multicast address and port
the island definition
the "distributable" tag in the web.xml deployment descriptor for the application
Based on this, it is possible to participate in an "HttpSession replication group" without belonging to the same configuration group. The procedure described in this section (intended to maintain state across redeployments of application to an OracleAS Cluster) is based on the separation of these two concepts. The procedure does not apply to OracleAS Cluster (OC4J) that is manually configured. For this type of cluster, the deployment is done node by node. In these cases, the session will be maintained automatically as long as there is enough time (between the deployments to each instance in the cluster) for replication to take place.
The procedure described later in this section assumes the following scenario:
One OracleAS Farm containing two Oracle Application Server instances (I1 and I2) configured in one cluster (cluster1) sharing state (that is, configured inside the same OC4J island). This cluster can be managed either through a file-based or database-based repository.
One application ("app") is deployed on this cluster. The application is deployed on one OC4J instance (OC4J_Clustered). The application is already deployed and is at version 0. Some state is being stored in the HttpSession. The goal is to deploy a new version of the application (version 1) while maintaining the state. The two Oracle HTTP Servers in the Oracle Application Server instances are load-balanced by an external load balancer (typically OracleAS Web Cache or a third-party hardware load balancer).
Figure 4-8 shows the initial scenario.
Using Application Server Control Console, stop OC4J and Oracle HTTP Server in one of the instances (I2, for example) in the cluster. This ensures that all requests will get routed to the one single node that will maintain the state.
Using Application Server Control Console, remove the stopped instance (I2) from the cluster.
In Application Server Control Console, click "Cluster1".
Select the "I2" instance.
Click Remove.
At this point, although you have removed I2 from the cluster and stopped the OC4J_Clustered component in I2, the replication configuration still remains the same and the two instances are still sharing the same IP multicast group.
Figure 4-9 shows the configuration at this point.
Figure 4-9 Instance I2 Stopped and Removed from Cluster1
Using Application Server Control Console, deploy a new version of the application to the I2 instance, which is no longer in the cluster. See Figure 4-10.
Figure 4-10 Instance I2 with New Version of the Application
Create a separate cluster (Cluster2) and add the standalone instance (I2) to it. See Figure 4-11.
Figure 4-11 Instance I2 Added to a New Cluster, Cluster2
Start the I2 instance in the second cluster (Cluster2). At this time, because the instances are still using the same IP multicast group and same island definition for the OC4J instance, replication between the two separate clusters is still taking place. See Figure 4-12.
Stop the I1 instance in the first cluster (Cluster1).
Remove the I1 instance from the first cluster (Cluster1).
Figure 4-13 shows the scenario at this point.
Figure 4-13 Instance I1 Stopped and Removed from Cluster1
Join instance I1, which is stopped, to the second cluster (Cluster2). DCM will automatically deploy the new version of the application (version 1) in the joining instance (I1).
Figure 4-14 Instance I1 Added to Cluster2, App Is now Version 1
Start the first instance I1.
Figure 4-15 Both Instances Running and Using the New Version of the Application
You now have two nodes with the new version of the application. The HttpSession information has been maintained across deployments. No state has been lost in the process.
To automate the procedure above, you can use DCM scripts in each of the involved instances. The following scripts provide a sample implementation and can be customized with the following parameters:
appname - the name of the application being deployed and updated
app.ear - the full path location to the new EAR file being deployed
instance_n - the name of the first Oracle Application Server instance that will be updated
last_instance - the name of the last Oracle Application Server instance that will be updated
cluster1 - the name of the cluster where the application is originally deployed
cluster2 - the name of the cluster used for session migration
dcmscript_instance_n
stop -i instance_n -ct OC4J
stop -i instance_n -co HTTP_Server -ct HTTP_Server
leaveCluster -i instance_n
undeployApplication -a appname
deployApplication -f app.ear -a appname
joinCluster -cl cluster2 -i instance_n
start -i instance_n
dcmscript_last_instance
stop -i last_instance
leaveCluster -i last_instance
joinCluster -cl cluster2 -i last_instance
start -i last_instance
The procedure involves first running dcmscript_instance_n in each of the instances that participate in the cluster and that will be updated before the last instance. After this is done in all the instances except the last one, run dcmscript_last_instance from the last instance. To run these scripts:
> cd ORACLE_HOME/dcm/bin
> dcmctl shell -f filename
where filename is the name of the DCM script (dcmscript_instance_n or dcmscript_last_instance).
For future stateful deployments, the operations would require switching roles between the original cluster and the session holder cluster. This can be achieved by creating two additional scripts.
dcmscript_instance_n_odd_run
stop -i instance_n -ct OC4J
stop -i instance_n -co HTTP_Server -ct HTTP_Server
leaveCluster -i instance_n
undeployApplication -a appname
deployApplication -f app.ear -a appname
joinCluster -cl cluster1 -i instance_n
start -i instance_n
dcmscript_last_instance_odd_run
stop -i last_instance
leaveCluster -i last_instance
joinCluster -cl cluster1 -i last_instance
start -i last_instance
The next deployment with state would require running the first set of scripts, the following deployment would use the odd_run scripts, and so on.
If the cluster contains more than two instances, it is necessary to move the instances between the two clusters in the same way I1 was moved in the example above (you would have to apply steps 6 through 9 for each one of the instances in the cluster).
cluster1 and cluster2 switch roles in subsequent deployments. After you finish moving the last instance as described in the procedure, simply leave the empty cluster for future use.
If the new version of the application modifies the signature of the objects that are added to the session (for example, if it adds new attributes to these objects), the container automatically triggers the load of the new version of the classes involved in the cluster. This means that the session is lost whenever the signature changes. Other modifications to the servlets, JSPs, and classes included in the redeployment are assimilated by the instances without any loss of HttpSession state.
This section describes Oracle HTTP Server options for DCM-Managed Oracle Application Server Clusters.
This section covers the following:
Section 4.2.6.1, "Using and Configuring mod_oc4j Load Balancing"
Section 4.2.6.2, "Configuring Oracle HTTP Server Instance-Specific Parameters"
Section 4.2.6.3, "Configuring mod_plsql With Real Application Clusters"
Using DCM-Managed OracleAS Clusters, the Oracle HTTP Server module mod_oc4j load balances requests to OC4J processes. The Oracle HTTP Server, using mod_oc4j configuration options, supports different load balancing policies. By specifying load balancing policies, DCM-Managed OracleAS Clusters provide performance benefits along with failover and high availability, depending on the network topology and host machine capabilities.
By default, mod_oc4j uses weights to select a node to forward a request to. Each node uses a default weight of 1. A node's weight is taken as a ratio compared to the weights of the other available nodes to define the number of requests the node should service compared to the other nodes in the DCM-Managed OracleAS Cluster. Once a node is selected to service a particular request, by default, mod_oc4j uses the roundrobin policy to select among the OC4J processes on that node. If an incoming request belongs to an established session, the request is forwarded to the same node and the same OC4J process that started the session.
The mod_oc4j load balancing policies do not take into account the number of OC4J processes running on a node when calculating which node to send a request to. Node selection is based on the configured weight for the node, and its availability.
To modify the mod_oc4j load balancing policy, use the Oc4jSelectMethod and Oc4jRoutingWeight configuration directives in the mod_oc4j.conf file.
Using Application Server Control Console, configure the mod_oc4j.conf file as follows:
Select the HTTP_Server component from the System Components area of an instance home page.
Click the Administration link on the HTTP_Server page.
Click the Advanced Server Properties link on the Administration page.
On the Advanced Server Properties page, select the mod_oc4j.conf link from the Configuration Files area.
On the Edit mod_oc4j.conf page, within the <IfModule mod_oc4j.c> section, add or edit the Oc4jSelectMethod and Oc4jRoutingWeight directives to select the desired load balancing option.
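The resulting mod_oc4j.conf entries might look like the following sketch. The values shown (a weighted round-robin selection method and per-node routing weights) are illustrative; the host names and weights are placeholders, and the available Oc4jSelectMethod values should be confirmed in the Oracle HTTP Server documentation for your release.
<IfModule mod_oc4j.c>
   # Choose the node using configured weights, then round-robin
   # among the OC4J processes on the selected node.
   Oc4jSelectMethod roundrobin:weighted
   Oc4jRoutingWeight node1.mycompany.com 3
   Oc4jRoutingWeight node2.mycompany.com 1
</IfModule>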
You can modify the Oracle HTTP Server ports and listening addresses on the Server Properties Page, which can be accessed from the Oracle HTTP Server Home Page. You can modify the virtual host information by selecting a virtual host from the Virtual Hosts section on the Oracle HTTP Server Home Page.
Table 4-3 shows the Oracle HTTP Server instance-specific parameters.
This section covers the following:
Using Oracle HTTP Server with the mod_plsql module, if a database becomes unavailable, the connections to the database need to be detected and cleaned up. This section explains how to configure mod_plsql to detect and clean up dead connections.
The mod_plsql module maintains a pool of connections to the database and reuses established connections for subsequent requests. If there is no response from a database connection, mod_plsql detects this case, discards the dead connection, and creates a new database connection for subsequent requests.
By default, when a Real Application Clusters node or a database instance goes down and mod_plsql previously pooled connections to the node or instance, the first mod_plsql request that uses a dead connection in its pool results in a failure response of HTTP-503 that is sent to the end-user. The mod_plsql module processes this failure and uses it to trigger detection and removal of all dead connections in the connection pool. The mod_plsql module pings all connection pools that were created before the failure response. This ping operation is performed at the time of processing for the next request that uses a pooled connection. If the ping operation fails, the database connection is discarded, and a new connection is created and processed.
Setting the PlsqlConnectionValidation parameter to Automatic causes the mod_plsql module to test all pooled database connections that were created before a failed request. This is the default configuration.
Setting the PlsqlConnectionValidation parameter to AlwaysValidate causes mod_plsql to test all pooled database connections before issuing any request. Although the AlwaysValidate configuration option ensures greater availability, it also introduces additional performance overhead.
You can specify the timeout period for mod_plsql to test a bad database connection in a connection pool. The PlsqlConnectionTimeout parameter specifies the maximum time mod_plsql should wait for the test request to complete before it assumes that a connection is not usable.
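For example, the two parameters might be set in the DAD configuration as in the following sketch. The DAD location /pls/mydad and the 10-second timeout are placeholders, and the dads.conf location is an assumption; see the mod_plsql User's Guide for the authoritative parameter reference.
# Assumed file: ORACLE_HOME/Apache/modplsql/conf/dads.conf
<Location /pls/mydad>
   # Other DAD parameters (database connect string, authentication
   # mode, and so on) are omitted from this sketch.
   PlsqlConnectionValidation AlwaysValidate
   PlsqlConnectionTimeout    10
</Location>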
See Also: Oracle Application Server mod_plsql User's Guide
Oracle Net clients can use a Directory Server to look up connect descriptors. At the beginning of a request, the client sends a connect identifier to the Directory Server, where it is resolved into a connect descriptor.
The advantage of using a Directory Server is that the connection information for a server can be centralized. If the connection information needs to be changed, either because of a port change or a host change, the new connection information only needs to be updated once, in the Directory Server, and all Oracle Net clients using this connection method will be able to connect to the new host.
See Also: Oracle Database Net Services Administrator's Guide for instructions on configuring Directory Naming.
After a DCM-Managed OracleAS Cluster is created, you can add Oracle Application Server instances to it. This section describes DCM-Managed OracleAS Cluster configuration and the characteristics of clusterable Oracle Application Server instances.
This section covers the following topics:
Section 4.2.7.1, "How the Common Configuration Is Established"
Section 4.2.7.2, "Parameters Excluded from the Common Configuration: Instance-Specific Parameters"
The order in which Oracle Application Server instances are added to the DCM-Managed OracleAS Cluster is significant. The common configuration that will be replicated across the DCM-Managed OracleAS Cluster is established by the first Oracle Application Server instance added to the cluster. The configuration of the first Oracle Application Server instance added is inherited by all Oracle Application Server instances that subsequently join the DCM-Managed OracleAS Cluster.
The common configuration includes all cluster-wide configuration information, namely DCM-Managed OracleAS Cluster and Oracle Application Server instance attributes, such as the components configured. For example, if the first Oracle Application Server instance to join the cluster has four OC4J instances, then the common configuration includes those four OC4J instances and the applications deployed on them. Oracle Application Server instances that subsequently join the DCM-Managed OracleAS Cluster replicate those OC4J instances and their deployed applications. (In addition, when an Oracle Application Server instance joins the DCM-Managed OracleAS Cluster, DCM removes any OC4J components that do not match the common configuration.) Furthermore, changes to one Oracle Application Server instance in the DCM-Managed OracleAS Cluster, such as adding or removing OC4J instances, are replicated across the DCM-Managed OracleAS Cluster; the components configured are part of the replicated cluster-wide common configuration.
When the last Oracle Application Server instance leaves a DCM-Managed OracleAS Cluster, the DCM-Managed OracleAS Cluster becomes an empty DCM-Managed OracleAS Cluster, and the next Oracle Application Server instance to join the DCM-Managed OracleAS Cluster provides a new common configuration for the DCM-Managed OracleAS Cluster.
Some parameters only apply to a given Oracle Application Server instance or computer; these parameters are instance-specific parameters. DCM does not propagate instance-specific parameters to the Oracle Application Server instances in a DCM-Managed OracleAS Cluster. When you change an instance-specific parameter, if you want the change to apply across the DCM-Managed OracleAS Cluster, you must apply the change individually to each appropriate Oracle Application Server instance.
Table 4-2 OC4J Instance-specific Parameters
Table 4-3 Oracle HTTP Server Instance-Specific Parameters
| Parameter | Description |
|---|---|
| ApacheVirtualHost | Specific to a computer. |
| Listen | Specific to a computer. This directive binds the server to specific addresses or ports. |
| OpmnHostPort | Specific to a computer. |
| Port | Specific to a computer. This directive specifies the port to which the standalone server listens. |
| User | Specific to a computer. |
| Group | Specific to a computer. |
| NameVirtualHost | Specific to a computer. This directive specifies the IP address on which the server receives requests for a name-based virtual host. This directive can also specify a port. |
| ServerName | Specific to a computer. This directive specifies the host name that the server should return when creating redirection URLs. This directive is used if |
| PerlBlob | Specific to a computer. |
Table 4-4 OPMN Instance-Specific Parameters
This section covers availability considerations for the DCM configuration repository, and covers the following topics:
Section 4.3.1, "Availability Considerations for DCM-Managed OracleAS Cluster (Database)"
Section 4.3.2, "Availability Considerations for DCM-Managed OracleAS Cluster (File-based)"
Note: The availability of the configuration repository only affects the Oracle Application Server configuration and administration services. It does not affect the availability of the system for handling requests, or availability of the applications running in a DCM-Managed OracleAS Cluster.
This section covers availability considerations for the DCM configuration repository when using DCM-Managed OracleAS Clusters with an OracleAS Database-based Farm.
Using an OracleAS Database-based Farm with a Real Application Clusters database or other database high availability solution protects the system by providing high availability, scalability, and redundancy during failures of the DCM configuration repository database.
See Also: The Oracle Database High Availability Architecture and Best Practices guide for a description of Oracle Database high availability solutions.
Using an OracleAS File-based Farm, the DCM configuration repository resides on one Oracle Application Server instance at any time. A failure of the host that contains the DCM configuration repository requires manual failover (by migrating the repository host to another host).
This section covers availability considerations for the DCM configuration repository when using DCM-Managed OracleAS Clusters with an OracleAS File-based Farm.
Section 4.3.2.1, "Selecting the Instance to Use for a OracleAS File-based Farm Repository Host"
Section 4.3.2.2, "Protecting Against the Loss of a Repository Host"
Section 4.3.2.4, "Impact of Non-Repository Host Unavailability"
Section 4.3.2.5, "Updating and Checking the State of Local Configuration"
Section 4.3.2.6, "Performing Administration on a DCM-Managed OracleAS Cluster"
Section 4.3.2.8, "Best Practices for Managing Instances in OracleAS File-based Farms"
Note: The information in this section does not apply to a DCM-Managed Oracle Application Server Cluster that uses an OracleAS Database-based Farm (with the repository type, database).
An important consideration for using DCM-Managed OracleAS Clusters with an OracleAS File-based Farm is determining which Oracle Application Server instance is the repository host.
Consider the following when selecting the repository host for an OracleAS File-based Farm:
When the repository host instance is temporarily unavailable, a DCM-Managed OracleAS Cluster that uses an OracleAS File-based Farm can still run normally, but it cannot update any configuration information.
Because the Oracle Application Server instance that is the repository host instance stores and manages the DCM-Managed OracleAS Cluster configuration information in its file system, the repository host instance should use mirrored or RAID disks. Disk mirroring improves the availability of the DCM-Managed OracleAS Cluster.
When the repository host instance is not available, read-only configuration operations are not affected on any Oracle Application Server instances that are running. The OracleAS Farm cluster-wide configuration information is distributed and managed through local Java Object Cache.
When the repository host instance is not available, operations that attempt to change configuration information in the file-based repository will generate an error. These operations must be delayed until the repository host instance is available, or until the repository host instance is relocated to another application server instance within the OracleAS File-based Farm.
Using an OracleAS File-based Farm, one instance in the farm is designated as the repository host. The repository host holds configuration information for all instances in the OracleAS File-based Farm. Access to the repository host is required for all configuration changes, write operations, for instances in the OracleAS File-based Farm. However, instances have local configuration caches to perform read operations, where the configuration is not changing.
In the event of the loss of the repository host, any other instance in the OracleAS File-based Farm can take over as the new repository host if an exported copy of the old repository is available. You should make regular backups of the repository, and save the backups on a separate system.
When the repository host is unavailable, only read-only operations are allowed. No configuration changes are allowed. If an operation is attempted that requires updates to the repository host, such as use of the updateConfig command, dcmctl reports an error message. For example:
ADMN-100205 Base Exception: The DCM repository is not currently available. The OracleAS 10g instance, "myserver.mydomain.com", is using a cached copy of the repository information. This operation will update the repository, therefore the repository must be available.
If the repository host is permanently down, or unavailable for the long term, then the repository host should be relocated. If the restored repository is not recent, local instance archives can be applied to bring each instance up to a newer state.
When the instances in a DCM-Managed OracleAS Cluster, other than the repository host instance, are down, all other instances can still function properly. If an instance is experiencing a short-term outage, the instance automatically updates its configuration information when it becomes available again.
If an instance is permanently lost, this has no effect on other instances in the OracleAS File-based Farm. However, to maintain consistency, you will need to delete all records pertaining to the lost instance.
To delete configuration information for a lost instance, use the following command:
> dcmctl destroyInstance
It is important that all configuration changes complete successfully, and that all instances in a cluster are "In Sync". The local configuration information must match the information stored in the repository. DCM does not know about manual changes to configuration files, and such changes could make the instances in a cluster have an In Sync status of false.
Use the following dcmctl command to return a list of all managed components with their In Sync status:
> dcmctl getState -cl cluster_name
The In Sync status of true implies that the local configuration information for a component is the same as the information that is stored in the repository.
If you need to update the file-based repository with changed local information, use the dcmctl updateConfig command, as follows:
> dcmctl updateconfig
> dcmctl getstate -cl cluster_name
Use the resyncInstance command to update local information with information from the repository. For example:
> dcmctl resyncinstance
By default, this command only updates configuration information for components whose In Sync status is false. Use the -force option to update all components, regardless of their In Sync status.
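Putting these commands together, a typical check-and-repair session might look like the following sketch (the cluster name is a placeholder): first check the In Sync status, push any deliberate local changes to the repository, and finally pull the repository configuration into any instance that is still out of sync:
> dcmctl getState -cl mycluster
> dcmctl updateConfig
> dcmctl resyncInstance -force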
During planned administrative downtimes, with a DCM-Managed OracleAS Cluster using an OracleAS File-based Farm that runs on multiple hosts with sufficient resources, you can perform administrative tasks while continuing to handle requests. This section describes how to relocate the repository host in a DCM-Managed OracleAS Cluster, while continuing to handle requests.
These procedures are useful for performing administrative tasks on a DCM-Managed OracleAS Cluster, such as the following:
Relocating the repository for repository host node decommission.
Applying required patches to the DCM-Managed OracleAS Cluster.
Applying system upgrades, changes, or patches that require a system restart for a host in the DCM-Managed OracleAS Cluster.
Note: Using the procedures outlined in this section, only administration capabilities are lost during a planned downtime.
Use the following steps to relocate the repository host in a DCM-Managed OracleAS Cluster.
Issue the following DCM command, on UNIX systems:
> cd $ORACLE_HOME/dcm/bin
> dcmctl exportRepository -f file
On Windows systems:
> cd %ORACLE_HOME%\dcm\bin
> dcmctl exportRepository -f file
Note: After this step, do not perform configuration or administration commands that would change the configuration. Otherwise those changes will not be copied when the repository file is imported to the new repository host.
Stop the administrative system, including Enterprise Manager and the DCM daemon in each instance of the OracleAS File-based Farm, except for the instance that is going to be the new repository host.
On UNIX systems use the following commands on each instance in the cluster:
> $ORACLE_HOME/bin/emctl stop iasconsole
> $ORACLE_HOME/opmn/bin/opmnctl stopproc ias-component=dcm-daemon
On Windows systems use the following commands on each instance in the cluster:
> %ORACLE_HOME%\bin\emctl stop iasconsole
> %ORACLE_HOME%\opmn\bin\opmnctl stopproc ias-component=dcm-daemon
At this point, the DCM-Managed OracleAS Cluster can still handle requests.
Import the saved repository on the host that is to be the repository host instance.
On UNIX systems, use the following commands:
> cd $ORACLE_HOME/dcm/bin/
> dcmctl importRepository -file filename
On Windows systems, use the following commands:
> cd %ORACLE_HOME%\dcm\bin\
> dcmctl importRepository -file filename
filename is the name of the file you specified in the exportRepository command.
While importRepository is active, the DCM-Managed OracleAS Cluster can still handle requests.
Note: The importRepository command issues a prompt specifying that the system currently hosting the repository must be shut down. However, only the dcm-daemon on the system that is currently hosting the repository must be shut down, not the entire system.
Use the following command to start all components on the new repository host. Do not perform administrative functions at this time.
On UNIX systems:
> $ORACLE_HOME/opmn/bin/opmnctl startall
On Windows systems:
> %ORACLE_HOME%\opmn\bin\opmnctl startall
On the system that was the repository host, indicate that the instance is no longer the host by issuing the following command:
> dcmctl repositoryRelocated
Start Application Server Control Console on the new repository host instance. The repository has now been relocated, and the new repository instance now handles requests.
On UNIX systems use the following commands on each instance in the cluster:
> $ORACLE_HOME/bin/emctl start iasconsole
On Windows systems use the following commands on each instance in the cluster:
> %ORACLE_HOME%\bin\emctl start iasconsole
Shut down the Oracle Application Server instance associated with the old repository host, using the following commands:
On UNIX systems:
> $ORACLE_HOME/opmn/bin/opmnctl stopall
On Windows systems:
> %ORACLE_HOME%\opmn\bin\opmnctl stopall
You can now perform the required administrative tasks on the old repository host system, such as the following.
Applying required patches to the repository host system in the DCM-Managed OracleAS Cluster
Decommissioning the node
Applying system upgrades, changes, or patches that require a system restart for the DCM-Managed OracleAS Cluster
After completing the administrative tasks on the system that was the repository host, if you want to switch the repository host back, you need to perform these steps again.
When you export repository files and archives, keep the files in known locations and back up the exports and archives regularly. It is also recommended that exported repositories be available to non-repository instances, not only as a backup measure but also for availability. If the repository instance becomes unavailable, a new instance can become the new repository host, but only if an exported repository file is available.
Oracle Application Server does not provide an automated repository backup procedure. However, to assure that you can recover from loss of configuration data, you need to put a repository backup plan in place. Perform repository backups regularly and frequently, and perform a repository backup after any configuration changes or topology changes where instances are added or removed.
Save repository backups to different nodes that are available to other nodes in the OracleAS File-based Farm.
When you add or remove an instance from an OracleAS File-based Farm, all the managed processes on that instance are stopped. If you want the instance to be available, then after performing the leave or join farm operation, restart the instance.
It is recommended that you back up the local configuration before leaving or joining an OracleAS File-based Farm. For example, use the following commands to create and export an archive:
> dcmctl createarchive -arch myarchive -comment "Archive before leaving farm"
> dcmctl exportarchive -arch myarchive -f /archives/myarchive
Archives are portable across OracleAS File-based Farms. When an instance joins a new farm, it can apply archives created on a previous farm.
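For example, an archive exported from one farm might be brought into another with a sketch like the following. The importArchive and applyArchiveTo commands exist in dcmctl, but the exact options shown here simply mirror the createArchive/exportArchive style above and are assumptions to verify against the Distributed Configuration Management Administrator's Guide:
> dcmctl importArchive -arch myarchive -f /archives/myarchive
> dcmctl applyArchiveTo -arch myarchive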
This section describes OracleAS Cluster (OC4J) configuration and the use of OracleAS Cluster (OC4J) with DCM-Managed OracleAS Clusters.
OracleAS Cluster (OC4J) enables Web applications to replicate state and provides high availability and failover for applications that run under OC4J. You can configure this feature without using a DCM-Managed OracleAS Cluster. However, using both of these Oracle Application Server features together simplifies and improves manageability and high availability. This section assumes that you are using both Oracle Application Server Cluster (OC4J) and DCM-Managed OracleAS Cluster.
This section covers the following:
Cluster-Wide Configuration Changes and Modifying OC4J Instances
Configuring OC4J Instance-Specific Parameters
See Also: Oracle Application Server Containers for J2EE User's Guide for detailed information on configuring OC4J instances
OracleAS Cluster (OC4J) enables Web applications to replicate state, and provides for high availability and failover for applications that run on OC4J. In a DCM-Managed OracleAS Cluster, Oracle Application Server instances and OC4J instances have the following properties:
Each Oracle Application Server instance has the same cluster-wide configuration. When you use Application Server Control Console or dcmctl to modify any cluster-wide OC4J parameters, the modifications are propagated to all Oracle Application Server instances in the cluster. To make cluster-wide OC4J configuration changes, you change the configuration parameters on a single Oracle Application Server instance. Oracle Application Server then propagates the modifications to all the other Oracle Application Server instances within the cluster.
When you modify any instance-specific parameters on an OC4J instance that is part of a DCM-Managed OracleAS Cluster, the change is not propagated across the DCM-Managed OracleAS Cluster. Changes to instance-specific parameters are only applicable to the specific Oracle Application Server instance where the change is made. Because different hosts running Oracle Application Server instances could each have different capabilities, such as total system memory, it may be appropriate for the OC4J processes within an OC4J instance to run with different configuration options.
Table 4-5 provides a summary of OC4J instance-specific parameters. Other OC4J parameters are cluster-wide parameters and are replicated across DCM-Managed OracleAS Clusters.
Table 4-5 OC4J Instance-Specific Parameters Summary for DCM-Managed OracleAS Cluster
This section covers the following topics:
Section 4.4.2.1, "Creating or Deleting OC4J Instances in an OracleAS Cluster (OC4J)"
Section 4.4.2.2, "Deploying Applications on an OracleAS Cluster (OC4J)"
Section 4.4.2.3, "Configuring Web Application State Replication with OracleAS Cluster (OC4J)"
Section 4.4.2.4, "Configuring EJB Application State Replication with OracleAS Cluster (OC4J-EJB)"
Section 4.4.2.5, "Configuring Stateful Session Bean Replication for OracleAS Cluster (OC4J-EJB)s"
See Also: Oracle Application Server Containers for J2EE User's Guide for details on OC4J configuration and application deployment
You can create a new OC4J instance on any Oracle Application Server instance within a DCM-Managed OracleAS Cluster, and the OC4J instance will be propagated to all Oracle Application Server instances across the cluster.
To create an OC4J instance, do the following:
Navigate to any application server instance within the DCM-Managed Oracle Application Server Cluster.
Select Create OC4J Instance under the System Components area. This displays the Create OC4J instance page.
Enter a name in the OC4J Instance name field.
Select Create.
Oracle Application Server creates the instance, and then DCM propagates the new OC4J instance across the DCM-Managed OracleAS Cluster.
A new OC4J instance is created with the name you provided. This OC4J instance shows up on each application server instance across the cluster, in the System Components section.
To delete an OC4J instance, select the checkbox next to the OC4J instance you wish to delete, then select Delete OC4J Instance. DCM propagates the OC4J removal across the cluster.
In a DCM-Managed OracleAS Cluster, when you deploy an application to one application server instance, the application is propagated to all application server instances across the cluster.
To deploy an application across a cluster, do the following:
Select the cluster you want to deploy the application to.
Select any application server instance from within the cluster.
Select an OC4J instance on the application server instance where you want to deploy the application.
Deploy the application to the OC4J instance using either Application Server Control Console or dcmctl commands.
DCM then propagates the application across the DCM-Managed Oracle Application Server Cluster.
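From the command line, the deployment step might look like the following sketch. The application name, EAR path, and OC4J instance name are placeholders, and the -co option for targeting a specific OC4J instance is an assumption to verify in the dcmctl documentation:
> dcmctl deployApplication -f /tmp/myapp.ear -a myapp -co OC4J_myinstance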
See Also: Oracle Application Server Containers for J2EE User's Guide for details on deploying applications to an OC4J instance
To ensure that Oracle Application Server maintains the state of stateful Web applications across a DCM-Managed OracleAS Cluster, you need to configure state replication for those Web applications.
To configure state replication for stateful Web applications, do the following:
Select the Administration link on the OC4J Home Page.
Select the Replication Properties link in the Instance Properties area.
Scroll down to the Web Applications section. Figure 4-16 shows this section.
Figure 4-16 Web State Replication Configuration
Select the Replicate session state checkbox.
Optionally, you can provide the multicast host IP address and port number. If you do not provide the host and port for the multicast address, they default to host IP address 230.0.0.1 and port number 9127. The host IP address must be in the range 224.0.0.2 through 239.255.255.255. Do not use the same multicast address for both HTTP and EJB multicast addresses.
Note: When choosing a multicast address, ensure that the address does not collide with the addresses listed in:
Also, if the low order 23 bits of an address are the same as those of the local network control block, 224.0.0.0 – 224.0.0.255, then a collision may occur. To avoid this problem, provide an address that does not have the same bits in the lower 23 bits of the address as the addresses in this range.
Add the <distributable/> tag to all web.xml files in all Web applications. If the Web application is serializable, you must add this tag to the web.xml file.
The following shows an example of this tag added to web.xml:
<web-app>
  <distributable/>
  <servlet>
  ...
  </servlet>
</web-app>
Note: For sessions to be replicated to a just-started instance that joins a running cluster (for example, where sessions are already being replicated between instances), the web module in the application maintaining the session has to be configured with the load-on-startup flag set to true. This is a cluster-wide configuration parameter. See Figure 4-17 for details on setting this flag.
Figure 4-17 Application Server Control Console Properties Page for Setting Load on Startup
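One place this flag commonly surfaces in the underlying configuration is the web module binding in an OC4J web site descriptor. The following is a hedged sketch only: the file name, application, and module names are placeholders, and the attribute placement should be verified for your release rather than taken as the authoritative location.
<!-- Assumed file: an OC4J web site descriptor such as http-web-site.xml -->
<web-app application="myapp" name="mywebmodule" root="/myapp"
         load-on-startup="true" />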
To create an EJB cluster, also known as OracleAS Cluster (OC4J-EJB), you specify the OC4J instances that are to be involved in the cluster, configure each of them with the same multicast address, username, and password, and deploy the EJB, which is to be clustered, to each of the nodes in the cluster.
EJBs involved in an OracleAS Cluster (OC4J-EJB) cannot be sub-grouped in an island. Instead, all EJBs within the cluster are in one group. Also, only session beans are clustered.
The state of all beans is replicated at the end of every method call to all nodes in the cluster using a multicast topic. Each node included in the OracleAS Cluster (OC4J-EJB) is configured to use the same multicast address.
The concepts for understanding how EJB object state is replicated within a cluster are described in the Oracle Application Server Containers for J2EE Enterprise JavaBeans Developer's Guide.
To configure EJB replication, do the following:
Click the Administration link on the OC4J Home Page.
Click the Replication Properties link in the Instance Properties area.
In the EJB Applications section, select the Replicate State checkbox.
Figure 4-18 shows this section.
Figure 4-18 EJB State Replication Configuration
Provide the username and password, which are used to authenticate this node to the other hosts in the OracleAS Cluster (OC4J-EJB). If the username and password are different on other hosts in the cluster, the hosts will fail to communicate. You can have multiple username and password combinations within a multicast address; the nodes with the same username/password combination are considered a unique cluster.
Optionally, you can provide the multicast host IP address and port number. If you do not provide the host and port for the multicast address, they default to host IP address 230.0.0.1 and port number 9127. The host IP address must be in the range 224.0.0.2 through 239.255.255.255. Do not use the same multicast address for both Web application and EJB multicast addresses.
Note: When choosing a multicast address, ensure that the address does not collide with the addresses listed in:
Also, if the low order 23 bits of an address are the same as those of the local network control block, 224.0.0.0 – 224.0.0.255, then a collision may occur. To avoid this, provide an address that does not have the same bits in the lower 23 bits of the address as the addresses in this range.
Configure the type of EJB replication in the orion-ejb-jar.xml file within the JAR file. See Section 4.4.2.5, "Configuring Stateful Session Bean Replication for OracleAS Cluster (OC4J-EJB)s" for details. You can configure these settings within the orion-ejb-jar.xml file before deployment, or add them through the Application Server Control Console screens after deployment. To add them after deployment, drill down to the JAR file from the application page.
For stateful session beans, you may have to modify the orion-ejb-jar.xml file to add the state replication configuration. Because you configure the replication type for the stateful session bean within the bean deployment descriptor, each bean can use a different type of replication.
Stateful session beans require state to be replicated among nodes. In fact, stateful session beans must send all of their state between the nodes, which can have a noticeable effect on performance. The following replication modes are therefore available so that you can decide how to manage the performance cost of replication:
The state of the stateful session bean is replicated to all nodes in the cluster, with the same multicast address, at the end of each EJB method call. If a node loses power, then the state has already been replicated.
To use end of call replication, set the replication attribute of the <session-deployment> tag in the orion-ejb-jar.xml file to "EndOfCall".
For example,
<session-deployment replication="EndOfCall" .../>
The state of the stateful session bean is replicated to only one other node in the cluster, with the same multicast address, when the JVM is terminating. This is the most performant option, because the state is replicated only once. However, it is not very reliable for the following reasons:
The state is not replicated if the power is shut off unexpectedly. The JVM termination replication mode does not guarantee state replication in the case of lost power.
The state of the bean exists only on a single node at any time; the depth of failure is equal to one node.
To use JVM termination replication, set the replication attribute of the <session-deployment> tag in the orion-ejb-jar.xml file to "VMTermination".
For example,
<session-deployment replication="VMTermination" .../>
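For additional context, a fuller orion-ejb-jar.xml fragment might look like the following sketch. The bean name is a placeholder and the surrounding elements reflect the usual orion-ejb-jar.xml structure; only the replication attribute values shown above are taken from this guide.
<orion-ejb-jar>
   <enterprise-beans>
      <session-deployment name="MyStatefulSessionBean" replication="VMTermination" />
   </enterprise-beans>
</orion-ejb-jar>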
This section covers the instance-specific parameters that are not replicated across DCM-Managed OracleAS Clusters. This section covers the following:
Section 4.4.3.1, "Configuring OC4J Islands and OC4J Processes"
Section 4.4.3.2, "Configuring Port Numbers and Command Line Options"
See Also: Oracle Application Server Containers for J2EE User's Guide for details on OC4J configuration and application deployment
To provide a redundant environment and to support high availability using DCM-Managed OracleAS Clusters, you need to configure multiple OC4J processes within each OC4J instance.
In a DCM-Managed OracleAS Cluster, state is replicated among OC4J islands that have the same name, both within an OC4J instance and across instances in the DCM-Managed OracleAS Cluster. To ensure high availability for stateful applications, OC4J island names must therefore match across the corresponding OC4J instances in the DCM-Managed OracleAS Cluster. It is your responsibility to make sure that island names match wherever session state replication is needed in a DCM-Managed OracleAS Cluster.
The number of OC4J processes on an OC4J instance within a DCM-Managed OracleAS Cluster is an instance-specific parameter because different hosts running application server instances in the DCM-Managed OracleAS Cluster could each have different capabilities, such as total system memory. Thus, it could be appropriate for a DCM-Managed OracleAS Cluster to contain application server instances that each run different numbers of OC4J processes within an OC4J instance.
To modify OC4J islands and the number of processes each OC4J island contains, do the following:
Click the Administration link on the OC4J Home Page of the application server instance of interest in the DCM-Managed OracleAS Cluster.
Click Server Properties in the Instance Properties area.
Scroll down to the Multiple VM Configuration section (Figure 4-19). This section defines the islands and the number of OC4J processes that should be started on this application server instance in each island.
Figure 4-19 OC4J instance Island and Number of Processes Configuration
Create any islands for this OC4J instance within the cluster by clicking Add Another Row. You enter a name for each island in the Cluster(OC4J) Name field. In the Number of Processes field, you designate how many OC4J processes should be started within each island.
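Behind the scenes, the island definitions for an OC4J instance are recorded in that instance's opmn.xml. The fragment below is an illustrative sketch only: the process-type id, island name, and process count are placeholders, and you should make these changes through Application Server Control Console (or dcmctl) rather than by editing the file directly.
<process-type id="home" module-id="OC4J">
   <process-set id="default_island" numprocs="2"/>
</process-type>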
Figure 4-20 shows the section where you can modify OC4J ports and set command-line options.
To modify OC4J ports or command-line options, do the following:
Click the Administration link on the OC4J Home Page of the Oracle Application Server instance of interest in the cluster.
Click Server Properties in the Instance Properties area.
Scroll down to the Multiple VM Configuration section. This section defines the ports and the command line options for OC4J and for the JVM that runs OC4J processes.
Figure 4-20 shows the Ports and Command line options areas on the Server Properties page.
Figure 4-20 OC4J Ports and Command Line Options Configuration
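As with the island settings, the port ranges and command-line options shown in Figure 4-20 are reflected in the same <process-type> entry of the instance's opmn.xml. The fragment below is an illustrative sketch only; the port ranges and Java options are placeholder values, not recommendations from this guide.
<process-type id="home" module-id="OC4J">
   <port id="ajp" range="3301-3400"/>
   <port id="rmi" range="3201-3300"/>
   <port id="jms" range="3701-3800"/>
   <category id="start-parameters">
      <data id="java-options" value="-server -Xmx512m"/>
   </category>
</process-type>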
This section provides instructions for managing OracleAS Cold Failover Cluster (Middle-Tier). Using OracleAS Cold Failover Cluster (Middle-Tier) provides cost reductions for a highly available system, as compared to a fully available active-active middle-tier system. In addition, some applications may not function properly in an active-active OracleAS Cluster environment (for example, an application that relies on queuing or other synchronous methods). In such cases, an OracleAS Cold Failover Cluster (Middle-Tier) provides high availability for the existing applications without modification.
This section covers the following topics:
Section 4.5.2, "Managing Failover for OracleAS Cold Failover Cluster (Middle-Tier)"
Section 4.5.3, "Moving Oracle Homes Between Local and Shared Storage"
Terminology Notes
This section uses the term "separate Oracle home installation" to mean OracleAS Cold Failover Cluster (Middle-Tier) installations where you place the Oracle home for the middle tier on the local storage of each node.
The term "single Oracle home installation" means OracleAS Cold Failover Cluster (Middle-Tier) installations where you place the Oracle home for the middle tier on the shared storage.
In separate Oracle home installations, any application deployment or configuration change needs to be applied to both nodes of the OracleAS Cold Failover Cluster (Middle-Tier). This is the responsibility of the administrator managing the OracleAS Cold Failover Cluster (Middle-Tier) environment.
This section covers the following:
In separate Oracle home installations, any applications deployed or any configuration changes made to the middle-tier installation should be made on both nodes of the cold failover cluster. This needs to be ensured by the administrator managing the environment.
Application deployment works as in any other middle-tier environment. For the J2EE installation of OC4J and Oracle HTTP Server, deploy applications on the active node as you would in any other multiple middle-tier environment; to deploy on the passive node, bring that node up during the deployment phase and then deploy the application on it.
For single Oracle home installations, application deployments and configuration changes need to be done only on the current active node.
For separate Oracle home installations, you should back up both nodes of the OracleAS Cold Failover Cluster (Middle-Tier). The procedure for this remains the same as for any other middle tier and is documented in the Oracle Application Server Administrator's Guide. Each installation needs to be backed up. During restoration, each backup can only be restored to the host it was backed up from. It should not be restored to the other node.
For single Oracle home installations, the middle tier backup and restore operations need to be done from just the current active node.
For separate Oracle home installations, to monitor or manage a node using Application Server Control Console, log in to the console using the physical hostname of the current active node. The Application Server Control Console processes can be up and running on both nodes of the cluster simultaneously. When changes to the environment are made, including configuration changes or application deployments, perform the changes on both nodes of the OracleAS Cold Failover Cluster (Middle-Tier).
For single Oracle home installations, to monitor or manage an OracleAS Cold Failover Cluster (Middle-Tier) deployment using Application Server Control Console, log in to the console using the virtual hostname.
In OracleAS Cold Failover Cluster (Middle-Tier), a failure in the active node, or a decision to stop the active node and fail over to the passive node, requires that you make the formerly passive node active (perform a failover operation).
The failover management itself can be performed using either of the following failover processes:
Automated failover using a cluster manager facility. The cluster manager uses packages to monitor the state of a service. If the service or the node is found to be down, the cluster manager automatically fails over the service from one node to the other node.
Manual failover. In this case, perform the manual failover steps as outlined in this section. Because both the detection of the failure and the failover itself are performed manually, the system may be unavailable for a longer period.
This section covers the following topics:
Section 4.5.2.1, "Manual Failover for OracleAS Cold Failover Cluster (Middle-Tier)"
Section 4.5.2.3, "Manual Failover of Components for OracleAS Cold Failover Cluster (Middle-Tier)"
Section 4.5.2.4, "Manual Failover of OracleAS Cluster (OC4J-JMS)"
For single Oracle home installations, the failover process to make the formerly passive node the new active node includes the following steps:
Stop all middle-tier services on the currently active node, if the node is still available.
Fail over the virtual IP to the new active node.
Fail over the shared disk on which the shared Oracle home resides and fail over the components to the new active node.
Start the middle-tier services on the new active node.
For separate Oracle home installations, the failover process to make the formerly passive node the new active node includes the following steps:
Stop all middle-tier services on the current active node, if the node is still available.
Fail over the components to the new active node.
Start the middle-tier services on the new active node.
Note: The failover process requires that you previously performed the post-installation steps that set up and configure the OracleAS Cold Failover Cluster (Middle-Tier), as outlined in the Oracle Application Server Installation Guide for your platform.
Perform the following steps to fail over the virtual IP in OracleAS Cold Failover Cluster (Middle-Tier):
Stop all Oracle Application Server processes on the failed node, if possible.
On UNIX systems:
> $ORACLE_HOME/opmn/bin/opmnctl stopall
On Windows systems:
%ORACLE_HOME%\opmn\bin\opmnctl stopall
Stop Oracle Application Server Administration processes on the failed node, if possible. On UNIX systems:
> $ORACLE_HOME/bin/emctl stop iasconsole
> $ORACLE_HOME/bin/emctl stop agent
On Windows systems:
> %ORACLE_HOME%\bin\emctl stop iasconsole
> %ORACLE_HOME%\bin\emctl stop agent
Fail over the virtual IP from the failed node to the new active node.
On Sun SPARC Solaris:
If the failed node is usable, log in as root on the failed node and execute the following command:
# ifconfig <interface_name> removeif <virtual_IP>
Log in as root on the new active node and execute the following command:
# ifconfig <interface_name> addif <virtual_IP> up
On Linux:
If the failed node is usable, log in as root on the failed node and execute the following command:
# /sbin/ifconfig <interface_name> down
Log in as root on the new active node and execute the following command:
# ifconfig <interface_name> netmask <netmask> <virtual_IP> up
On Windows:
On the failed node, move the group that was created using Oracle Fail Safe as follows:
Start up Oracle Fail Safe Manager.
Right-click the group that was created during the Oracle Application Server middle-tier installation and select "Move to different node".
Note: If OracleAS JMS is using file-persistence, fail over the shared disk as well.
After performing the failover steps for the virtual IP on the new active node, perform the following steps to stop and start the Oracle Application Server processes on the OracleAS Cold Failover Cluster (Middle-Tier) system:
Stop Oracle Application Server processes on the new active node and start OPMN only.
Execute the following commands on UNIX systems:
> $ORACLE_HOME/opmn/bin/opmnctl stopall
> $ORACLE_HOME/opmn/bin/opmnctl start
Stop Oracle Application Server Administration processes on the new active node, using the following commands
On UNIX systems:
> $ORACLE_HOME/bin/emctl stop iasconsole
> $ORACLE_HOME/bin/emctl stop agent
On Windows systems:
> %ORACLE_HOME%\bin\emctl stop iasconsole
> %ORACLE_HOME%\bin\emctl stop agent
On the current active node, execute the following commands.
On UNIX systems:
> $ORACLE_HOME/opmn/bin/opmnctl stopall
> $ORACLE_HOME/opmn/bin/opmnctl startall
On Windows systems:
> %ORACLE_HOME%\opmn\bin\opmnctl stopall
> %ORACLE_HOME%\opmn\bin\opmnctl startall
If you use Application Server Control Console, start Oracle Application Server Administration processes on the current active node using the following commands.
On UNIX systems:
> $ORACLE_HOME/bin/emctl start agent
> $ORACLE_HOME/bin/emctl start iasconsole
On Windows systems:
> %ORACLE_HOME%\bin\emctl start agent
> %ORACLE_HOME%\bin\emctl start iasconsole
If you are using OracleAS Cluster (OC4J-JMS), and the system fails abnormally, you may need to perform additional failover steps such as removing lock files for OracleAS JMS file-based persistence.
See Also: "Abnormal Termination" in the "Oracle Application Server JMS" section of the Oracle Application Server Containers for J2EE Services Guide
In OracleAS Cold Failover Cluster (Middle-Tier) topologies, you can install the Oracle home for the middle tier on a shared storage (called "single Oracle home installations"), or you can install separate Oracle homes for the middle tier on the local storage of each node (called "separate Oracle home installations"). This section describes how to move the Oracle home from one storage type to the other.
To move from a single Oracle home on the shared storage to separate Oracle homes on the local disks
Create a new separate Oracle home-based OracleAS Cold Failover Cluster (Middle-Tier) topology by following the steps in the Oracle Application Server Installation Guide. The middle-tier type can be the same as or different from the original single home OracleAS Cold Failover Cluster (Middle-Tier).
Redo any changes made to the configuration of the original single-home OracleAS Cold Failover Cluster (Middle-Tier) topology.
Redeploy any applications deployed in the original single-home OracleAS Cold Failover Cluster (Middle-Tier) topology.
Deinstall the single-home OracleAS Cold Failover Cluster (Middle-Tier) instance.
To move from separate Oracle homes on the local storage to a single Oracle home on the shared storage
Note that OracleAS Wireless is not supported on single Oracle home configurations. This means that the source and destination Oracle homes should not have OracleAS Wireless configured.
If you want to retain one of the original Oracle homes:
Move the Oracle home you want to retain to a shared storage. Ensure that the Oracle home path remains the same. If this is not possible, then this migration option cannot be used.
Re-run the chgtocfmt conversion script on this instance to convert it to a single Oracle home installation. The -n option should not be specified in this run.
Deinstall the unused instance from the original separate Oracle home.
If you do not want to retain the original Oracle homes:
Create a new single Oracle home-based OracleAS Cold Failover Cluster (Middle-Tier) topology by following the steps in the Oracle Application Server Installation Guide. The middle-tier type installed here can be the same as or different from the original Oracle homes.
Redo any changes made to the configuration of the original OracleAS Cold Failover Cluster (Middle-Tier).
Redeploy any applications deployed in the original OracleAS Cold Failover Cluster (Middle-Tier).
Deinstall the original separate Oracle homes.
Applications deployed on OracleAS Cold Failover Cluster (Middle-Tier) should be accessed using the virtual hostname. For the applications to work, they should not be dependent in any way on the local physical host on which they are running. To ensure continuity after failover, any resources required by the application should be made available on the failover node.
All external access for the application (or any other Oracle product such as OracleAS Integration) should use the virtual hostname. The published URL for the application should use the virtual hostname.
When you upgrade systems in a high availability environment, your goal should be to upgrade all Oracle Application Server instances to the same version—in this case, Oracle Application Server 10g (10.1.2). Running all of the Oracle Application Server instances at the same version level is not mandatory; however, doing so makes it easier to manage, troubleshoot, and maintain J2EE applications and the Oracle Application Server components.
If you choose to maintain previous versions of Oracle Application Server, you must consider which combinations of versions are supported. See the Oracle Application Server Upgrade and Compatibility Guide for details.
This section covers the following topics:
When you perform an upgrade operation, you need to upgrade Oracle Application Server instances in a specific order to avoid unsupported or unstable configurations. See the Oracle Application Server Upgrade and Compatibility Guide for details.
In DCM-Managed OracleAS Clusters, each instance joined to the cluster must use the same Oracle Application Server version. Before you upgrade instances in a DCM-Managed OracleAS Cluster, you need to do the following:
Remove the Oracle Application Server instance from the DCM-Managed OracleAS Cluster using either Application Server Control Console or the dcmctl leavecluster command.
Ensure that any old instance archives that need to be retained are exported to the file system using the DCM exportarchive command. The upgrade procedure does not upgrade archives. These archives can then be re-imported after the upgrade process.
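For example, the two steps above might look as follows from the command line. The archive name and file path are placeholders, and the exact exportarchive argument syntax is an assumption that should be verified against the Distributed Configuration Management Administrator's Guide.
> dcmctl leavecluster
> dcmctl exportarchive -arch my_archive -f /backups/my_archive.jar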
After completing the upgrades for all the Oracle Application Server instances that are part of the DCM-Managed OracleAS Cluster, you can then join the instances into a new DCM-Managed OracleAS Cluster.
See Also: Section 4.3.2.6, "Performing Administration on a DCM-Managed OracleAS Cluster" for information on how to minimize downtime while upgrading instances in a DCM-Managed OracleAS Cluster that uses an OracleAS File-based Farm.
You can upgrade OC4J applications running in an OracleAS Cluster (OC4J) that use HTTPSession to store state with no session loss. See Section 4.2.5, "Rolling Upgrades for Stateful J2EE Applications" for details.
To enable OracleAS Single Sign-On with OracleAS Cluster, the OracleAS Single Sign-On server needs to be aware of the entry point into the OracleAS Cluster, which is commonly the load balancer in front of the Oracle HTTP Servers. Usually, this is OracleAS Web Cache, a network load balancer appliance, or Oracle HTTP Server.
To register an OracleAS Cluster's entry point with the OracleAS Single Sign-On server, use the ssoreg.sh script (ssoreg.bat on Windows).
In order to use OracleAS Single Sign-On functionality, all Oracle HTTP Server instances in an OracleAS Cluster must have an identical OracleAS Single Sign-On registration.
Each Oracle HTTP Server is registered with the same OracleAS Single Sign-On server.
Each Oracle HTTP Server redirects success, logout, cancel, and home message URLs to the network load balancer. Because clients cannot access an Oracle HTTP Server directly, they interact only with the network load balancer.
If you do not use a network load balancer, then the OracleAS Single Sign-On configuration must originate with whatever you use as the incoming load balancer (OracleAS Web Cache, Oracle HTTP Server, and so on).
To configure a DCM-Managed OracleAS Cluster for single sign-on, execute the ssoreg.sh script (ssoreg.bat on Windows) against one of the Oracle Application Server instances in the DCM-Managed OracleAS Cluster. This tool registers the OracleAS Single Sign-On server and the redirect URLs with all Oracle HTTP Servers in the OracleAS Cluster, and establishes all information necessary to facilitate secure communication between the Oracle HTTP Servers in the OracleAS Cluster and the OracleAS Single Sign-On server.
On one of the Oracle Application Server instances, you define the configuration by running the ssoreg.sh script (ssoreg.bat on Windows). DCM then propagates the configuration to all other Oracle HTTP Servers in the DCM-Managed OracleAS Cluster.
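The following is a representative ssoreg.sh invocation against one instance in the DCM-Managed OracleAS Cluster. The script location under $ORACLE_HOME/sso/bin, the site name, the URL, and the port are assumptions and placeholders based on typical mod_osso registrations; see the syntax reference cited below for the authoritative parameter list.
> $ORACLE_HOME/sso/bin/ssoreg.sh -site_name lb.mycompany.com \
      -mod_osso_url http://lb.mycompany.com:7777 \
      -config_mod_osso TRUE \
      -oracle_home_path $ORACLE_HOME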
For syntax information, see the section "ssoreg syntax and parameters" in the Oracle Application Server Single Sign-On Administrator's Guide.
Note: When using OracleAS Single Sign-On with Oracle HTTP Servers in the OracleAS Cluster, set the KeepAlive directive to OFF. When the Oracle HTTP Servers are behind a network load balancer, if the KeepAlive directive is set to ON, then the network load balancer maintains state with the Oracle HTTP Server for the same connection, which results in an HTTP 503 error. Modify the KeepAlive directive in the Oracle HTTP Server configuration in the httpd.conf file.
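For example, the relevant line in each Oracle HTTP Server's httpd.conf would read:
KeepAlive Off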