Oracle® Application Server High Availability Guide
10g Release 2 (10.1.2) B14003-03
Oracle Identity Management includes the following components:
Oracle Internet Directory
Oracle Directory Integration and Provisioning
Oracle Delegated Administration Services
OracleAS Single Sign-On
OracleAS Certificate Authority
Decisions to Make
To run these components in a high availability configuration, you must make two decisions:
Do you want to run all the Oracle Identity Management components together from the same Oracle home, or install and run them on multiple nodes?
Do you want to run the components in active-active mode or active-passive mode?
Later sections in this chapter describe each option and mode.
Table 8-1 shows possible configurations that result from the two questions above. For example, you can run all Oracle Identity Management components in active-active mode.
Table 8-1 High Availability Configurations for Oracle Identity Management Components
 | Active-Active Configuration | Active-Passive Configuration
---|---|---
Non-Distributed Model: All Oracle Identity Management components in the same Oracle home | Section 8.5, "All Oracle Identity Management Components in Active-Active Configurations" | Section 8.6, "All Oracle Identity Management Components in Active-Passive Configurations"
Distributed Model: Oracle Internet Directory and Oracle Directory Integration and Provisioning | |
Distributed Model: OracleAS Single Sign-On and Oracle Delegated Administration Services | |
In this model, you install and run all the Oracle Identity Management components in the same Oracle home.
In active-active configurations (also called OracleAS Cluster configurations), you install the components on the local storage of each node. You also need a load balancer in front of these nodes. Requests to these components go to the load balancer, which load balances the requests among the nodes.
In active-passive configurations (also called OracleAS Cold Failover Cluster configurations), you have two nodes in a hardware cluster, and a storage device shared by these nodes. You install the components on the shared storage device. Only one node is active at any time. The other node, called the passive or standby node, becomes active when the active node fails. The passive node then becomes the new active node: it mounts the shared storage device and runs the Oracle Identity Management components.
Installing and managing all the Oracle Identity Management components in the same Oracle home is easier than installing them in a distributed manner.
If you need to install and run some components on nodes that are more secure (located behind additional firewalls), then you need to distribute the Oracle Identity Management components.
You can also install and run the Oracle Identity Management components on separate nodes. A common distribution model is:
OracleAS Single Sign-On and Oracle Delegated Administration Services on one set of computers
Oracle Internet Directory and Oracle Directory Integration and Provisioning on another set of computers
The components are separated in this manner because OracleAS Single Sign-On and Oracle Delegated Administration Services are typically the first components to be accessed directly by clients and other components. You can run these components on computers in the DMZ.
You typically run Oracle Internet Directory and your databases (including the OracleAS Metadata Repository) on computers located behind an additional firewall because they contain data that you want to secure.
Active-active and active-passive configurations for distributed Oracle Identity Management components are similar to active-active and active-passive configurations for the non-distributed model. The only difference is which components are running in the configuration. For example, instead of all Oracle Identity Management components, you might have only the Oracle Internet Directory component running in an active-active configuration.
Advantages of Distributing Oracle Identity Management Components
Reasons for distributing the Oracle Identity Management components include:
Security: You might want to run some components, typically the Oracle Internet Directory, on computers that are located behind additional firewalls.
Performance: You may get better performance by running the components on multiple computers.
Choice of high availability configuration: You can configure different high availability models for each tier. For example, in the Distributed OracleAS Cold Failover Cluster (Identity Management) Topology, you run OracleAS Single Sign-On and Oracle Delegated Administration Services in an active-active configuration, but run Oracle Internet Directory in an active-passive configuration.
Performance isolation: You can scale each set of components independently of each other. For example, if the bottleneck is in OracleAS Single Sign-On, you can just increase the number of nodes that are running OracleAS Single Sign-On without changing the number of nodes that are running Oracle Internet Directory.
Disadvantages of Distributing Oracle Identity Management Components
Multiple installations are required: you must perform an installation on each node.
You also need to manage, configure, and patch each node separately.
In active-active configurations, you install and run Oracle Identity Management components on multiple nodes. Each node runs the same components as the other nodes.
You need an external load balancer in front of the nodes. Requests to these nodes are directed to the load balancer, which then sends the requests to one of the nodes for processing. The load balancer uses its own algorithm to decide which node to send a request to. See Section 2.2.4.2, "External Load Balancers" for load balancer details.
You configure the load balancer with a virtual server name and port. When clients need to access an Oracle Identity Management component running on the nodes, they use this virtual server name and port.
In active-passive configurations, you have two nodes in a hardware cluster, and a shared storage that can be mounted by either node. You install the Oracle home for the Oracle Identity Management components on the shared storage.
One of the nodes in the hardware cluster is the active node. It mounts the shared storage and runs the Oracle Identity Management components. The other node is the passive, or standby, node. It runs only when the active node fails. In the failover event, the passive node mounts the shared storage and runs the Oracle Identity Management components.
You also need a virtual hostname and virtual IP address to associate with the nodes in the hardware cluster. Clients use this virtual hostname to access the Oracle Identity Management components. During normal operation, the virtual hostname and IP address are associated with the active node. During failover, you make the switch: the virtual hostname and IP address are now associated with the passive node.
In this configuration, you install the Oracle Identity Management components on the local storage of each node. You also need a load balancer in front of these nodes, and you need to configure virtual hostnames for HTTP, HTTPS, LDAP, and LDAPS traffic on the load balancer.
Figure 8-1 Oracle Identity Management Components in an Active-Active Configuration
To access the Oracle Identity Management components, clients send requests to the load balancer, using the appropriate load balancer's virtual hostname. For example, Web clients that need to access OracleAS Single Sign-On or Oracle Delegated Administration Services send their requests using the HTTP virtual hostname. Oracle Internet Directory clients, on the other hand, need to use the LDAP virtual hostname.
OPMN also runs on each node. If an OPMN-managed component fails, OPMN tries to restart it. See Section 2.2.1.1.1, "Automated Process Management with OPMN", which describes OPMN and the components that it manages.
OracleAS Certificate Authority Not Supported
Note that OracleAS Certificate Authority is not supported in an active-active configuration. You can install and run OracleAS Certificate Authority separately.
Topologies that Use This Configuration
OPMN runs on each node to provide process management, monitoring, and notification services for the OC4J_SECURITY instances, Oracle HTTP Server, and oidmon processes. (oidmon manages the Oracle Internet Directory processes.) If any of these processes fails, OPMN detects the failure and attempts to restart it. If the restart is unsuccessful, the load balancer detects the failure (usually through a non-response timeout) and directs requests to an active process running on a different node.
For the Oracle Internet Directory component, OPMN monitors oidmon, which in turn monitors the oidldapd, oidrepld, and odisrv Oracle Internet Directory processes. If oidldapd, oidrepld, or odisrv fails, oidmon attempts to restart it locally. Similarly, if oidmon fails, OPMN tries to restart it locally.
Only one odisrv process and one oidrepld process can be active at any time in an OracleAS Cluster (Identity Management), while multiple oidldapd processes can run in the same cluster. Refer to the Oracle Internet Directory Administrator's Guide for more details.
If a node fails, the load balancer detects the failure and redirects requests to an active node. Because each node provides the same services as the others, all requests can be fulfilled by the remaining nodes.
You start the Oracle Identity Management components in the following order:
Make sure the OracleAS Metadata Repository database is running.
On each node, perform these steps:
Set the ORACLE_HOME environment variable to the Oracle Identity Management Oracle home.
Run OPMN to start the Oracle Identity Management components.
ORACLE_HOME/opmn/bin/opmnctl startall
Start Application Server Control.
ORACLE_HOME/bin/emctl start iasconsole
To stop the Oracle Identity Management components, run the following steps on each node:
Set the ORACLE_HOME environment variable to the Oracle Identity Management Oracle home.
Run OPMN to stop the Oracle Identity Management components.
ORACLE_HOME/opmn/bin/opmnctl stopall
Stop Application Server Control.
ORACLE_HOME/bin/emctl stop iasconsole
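The start and stop sequences above lend themselves to a small per-node wrapper. The sketch below only prints the commands (a dry run) so you can review them first; the default Oracle home path is an assumed example, not a path from this guide.

```shell
#!/bin/sh
# Dry-run sketch of the per-node start/stop sequence described above.
# It prints the opmnctl and emctl commands instead of running them;
# ORACLE_HOME defaults to an assumed example path.

idm_commands() {    # $1 = start | stop
    oh=${ORACLE_HOME:-/u01/app/oracle/idm_1012}   # assumed example path
    case $1 in
        start)
            echo "$oh/opmn/bin/opmnctl startall"      # OPMN-managed components
            echo "$oh/bin/emctl start iasconsole"     # Application Server Control
            ;;
        stop)
            echo "$oh/opmn/bin/opmnctl stopall"
            echo "$oh/bin/emctl stop iasconsole"
            ;;
    esac
}

# Review the sequence, then pipe it to sh to execute it:
idm_commands start
```

Running `idm_commands start | sh` on a node would execute the same two steps the text lists; the OracleAS Metadata Repository database must already be up before you do so.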
You can use Application Server Control to manage the Oracle Identity Management components on each node.
In the Application Server Control URL, you use the physical hostname, for example: http://im1.mydomain.com:1156 (assuming Application Server Control is running on port 1156).
You back up files for the Oracle Identity Management components using the OracleAS Backup and Recovery Tool. This tool is described in the Oracle Application Server Administrator's Guide.
In an active-passive (also called a cold failover cluster) configuration (see Figure 8-2), you have two nodes in a hardware cluster. You install the Oracle Identity Management components on the storage shared by these nodes.
Figure 8-2 Oracle Identity Management Components in Active-Passive Configuration
In this configuration, only one node is active at any time. This active node runs all the processes. The other node, called the passive or standby node, runs only when the active node fails or when components on the active node fail to run.
You need to configure a virtual server name and virtual IP address for these nodes in the hardware cluster. The virtual server name and virtual IP address point to the node that is the active node.
The nodes in the hardware cluster also run clusterware that is provided by the hardware vendor. The clusterware monitors the active node to ensure that it is running.
To access the Oracle Identity Management components, clients send requests using the virtual server name.
OPMN also runs on the active node. If an OPMN-managed component fails, OPMN tries to restart it. See Section 2.2.1.1.1, "Automated Process Management with OPMN", which describes OPMN and the components that it manages.
Topologies that Use This Configuration
OPMN runs on the active node to provide process management, monitoring, and notification services for the OC4J_SECURITY instances, Oracle HTTP Server, and oidmon processes. (oidmon manages the Oracle Internet Directory processes.) If any of these processes fails, OPMN detects the failure and attempts to restart it. If the restart is unsuccessful, the clusterware detects the failure and fails over all the processes to the passive node.
For the Oracle Internet Directory component, OPMN monitors oidmon, which in turn monitors the oidldapd, oidrepld, and odisrv Oracle Internet Directory processes. If oidldapd, oidrepld, or odisrv fails, oidmon attempts to restart it locally. Similarly, if oidmon fails, OPMN tries to restart it locally.
Only one odisrv process and one oidrepld process can be active at any time in an OracleAS Cluster (Identity Management), while multiple oidldapd processes can run in the same cluster. Refer to the Oracle Internet Directory Administrator's Guide for more details.
If a node fails, the clusterware detects the failure and fails over all the processes to the passive node.
Note: While the hardware cluster framework can start, monitor, or fail over OracleAS Infrastructure processes, these actions are not automatic. You have to do them manually, create scripts to automate them, or use scripts provided by the cluster vendor for OracleAS Cold Failover Cluster.
Perform the following steps to fail over from the active node to the standby node on Solaris systems with Veritas Volume Manager.
Steps to Perform on the Failed Node
If necessary, stop or kill all Oracle Application Server processes on this node.
Ensure that the file system is not busy. If it is busy, check which processes are using the file system and stop them if required.
Unmount the file system using the following command:
# umount <mount_point>
As root, deport the disk group. For example, if you are using Sun Cluster with Veritas Volume Manager, deport the disk group using the following command:
# vxdg deport <disk_group_name>
If the failed node is usable, execute this command to release the virtual IP address:
# ifconfig <interface_name> removeif <virtual_IP>
Steps to Perform on the New Active Node
As root, execute the following command to assign the virtual IP to this node:
# ifconfig <interface_name> addif <virtual_IP> up
As root, import the disk group. For example, if you are using Sun Cluster with Veritas Volume Manager, use the following commands:
# vxdg import <disk_group_name>
# vxvol -g <disk_group_name> startall
As root, mount the file system using the following command:
# mount /dev/vx/dsk/<disk_group_name>/<volume_name> <mount_point>
Start all Oracle Application Server processes on this new active node. See Section 8.6.5, "Starting Oracle Identity Management Components".
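Under the same Sun Cluster/Veritas assumptions, the manual sequence above can be collected into a reviewable script. This sketch echoes each command rather than executing it; every name (disk group, volume, mount point, interface, virtual IP) is a placeholder you must replace with your own values.

```shell
#!/bin/sh
# Dry-run sketch of the Solaris/Veritas failover sequence above.
# Each function echoes the root commands for one side of the failover;
# all arguments are placeholders.

release_node() {    # failed node: $1=mount point, $2=disk group, $3=interface, $4=virtual IP
    echo "umount $1"                   # unmount the shared file system
    echo "vxdg deport $2"              # deport the disk group
    echo "ifconfig $3 removeif $4"     # release the virtual IP
}

takeover_node() {   # new active node: $1=interface, $2=virtual IP, $3=disk group, $4=volume, $5=mount point
    echo "ifconfig $1 addif $2 up"     # assign the virtual IP
    echo "vxdg import $3"              # import the disk group
    echo "vxvol -g $3 startall"        # start its volumes
    echo "mount /dev/vx/dsk/$3/$4 $5"  # mount the shared file system
}

release_node  /oracle oradg hme0 144.88.27.125
takeover_node hme0 144.88.27.125 oradg oravol /oracle
```

After reviewing the echoed commands, the final step (not echoed here) is to start the Oracle Application Server processes on the new active node, as the text describes.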
Figure 8-3, Figure 8-4, and Figure 8-5 show the Oracle Fail Safe Manager screens for a failover operation from the active node to the standby node on Windows.
Figure 8-3 Screen 1 Performing Failover for Oracle Identity Management in an Active-Passive Configuration
Figure 8-4 Screen 2 Performing Failover for Oracle Identity Management in an Active-Passive Configuration
Figure 8-5 Screen 3 Performing Failover for Oracle Identity Management in an Active-Passive Configuration
Perform the following steps to fail over from the active node to the standby node on Linux systems.
Steps to Perform on the Failed Node
Make sure all the Oracle Identity Management processes are stopped on the failed node.
Log in as root.
Unmount the file system using the following command:
# umount <mount_point>
If the file system is busy, check which processes are using the file system with the following command:
# fuser -muv <Shared Storage Partition>
Stop the processes, if required, using the following command:
# fuser -k <Shared Storage Partition>
If the failed node is usable, execute the following command to release the virtual IP address:
# ifconfig <interface_name> down
For example,
# ifconfig eth1:1 down
Steps to Perform on the New Active Node
Log in as root.
Execute the following command to assign the virtual IP address to this node (the new active node):
# ifconfig <interface_name> <virtual_IP> netmask <subnet_mask> up
For example:
# ifconfig eth1:1 144.88.27.125 netmask 255.255.252.0 up
Verify that the virtual IP is up and working by using telnet from a different host (subnet/domain).
Mount the file system using the following command:
# mount <Shared Storage Partition> <mount_point>
For example:
# mount /dev/sdc1 /oracle
Start Oracle Application Server processes on this new active node. See Section 8.6.5, "Starting Oracle Identity Management Components".
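The Linux sequence can be sketched the same way, again as a dry run: the functions echo the root commands in order, including the fuser checks for a busy file system. The partition, mount point, interface alias, and addresses are the example values used above; substitute your own.

```shell
#!/bin/sh
# Dry-run sketch of the Linux failover steps above. The commands are
# echoed for review; run them as root on the real nodes.

linux_release() {   # failed node: $1=shared partition, $2=mount point, $3=interface alias
    echo "fuser -muv $1"       # list processes using the file system, if busy
    echo "fuser -k $1"         # stop them, if required
    echo "umount $2"           # unmount the shared file system
    echo "ifconfig $3 down"    # release the virtual IP
}

linux_takeover() {  # new active node: $1=alias, $2=virtual IP, $3=netmask, $4=partition, $5=mount point
    echo "ifconfig $1 $2 netmask $3 up"   # assign the virtual IP
    echo "mount $4 $5"                    # mount the shared file system
}

linux_release  /dev/sdc1 /oracle eth1:1
linux_takeover eth1:1 144.88.27.125 255.255.252.0 /dev/sdc1 /oracle
```

As with the manual steps, verify the virtual IP from a different host and then start the Oracle Application Server processes on the new active node.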
You start the Oracle Identity Management components in the following order:
Make sure the OracleAS Metadata Repository database is running.
On the active node:
Set the ORACLE_HOME environment variable to the Oracle Identity Management Oracle home.
Run OPMN to start up the Oracle Identity Management components.
ORACLE_HOME/opmn/bin/opmnctl startall
Start up Application Server Control.
ORACLE_HOME/bin/emctl start iasconsole
To stop the processes, run the following steps on the active node:
Set the ORACLE_HOME environment variable to the Oracle Identity Management Oracle home.
Run OPMN to stop the Oracle Identity Management components.
ORACLE_HOME/opmn/bin/opmnctl stopall
Stop Application Server Control.
ORACLE_HOME/bin/emctl stop iasconsole
You can use Application Server Control to manage the Oracle Identity Management components on the active node.
In the Application Server Control URL, you use the physical hostname of the active node, for example: http://im1.mydomain.com:1156 (assuming Application Server Control is running on port 1156).
You back up files for the Oracle Identity Management components using the OracleAS Backup and Recovery Tool. This tool is described in the Oracle Application Server Administrator's Guide.
In this configuration, you install Oracle Internet Directory and Oracle Directory Integration and Provisioning components on the local storage of each node. You also need a load balancer in front of these nodes, and you need to configure virtual hostnames for HTTP, HTTPS, LDAP, and LDAPS traffic on the load balancer.
Figure 8-6 Oracle Internet Directory and Oracle Directory Integration and Provisioning in an Active-Active Configuration
To access Oracle Internet Directory, clients send requests to the load balancer, using the load balancer's LDAP virtual hostname.
OPMN also runs on each node. If Oracle Internet Directory or Oracle Directory Integration and Provisioning fails, OPMN tries to restart it. See Section 2.2.1.1.1, "Automated Process Management with OPMN", which describes OPMN and the components that it manages.
Topologies that Use This Configuration
OPMN runs on each node to provide process management, monitoring, and notification services for the oidmon process.
For the Oracle Internet Directory component, OPMN monitors oidmon, which in turn monitors the oidldapd, oidrepld, and odisrv Oracle Internet Directory processes. If oidldapd, oidrepld, or odisrv fails, oidmon attempts to restart it locally. Similarly, if oidmon fails, OPMN tries to restart it locally.
Only one odisrv process and one oidrepld process can be active at any time in an OracleAS Cluster (Identity Management), while multiple oidldapd processes can run in the same cluster. See the Oracle Internet Directory Administrator's Guide for details.
If OPMN fails to restart oidmon, or if oidmon fails to restart the Oracle Internet Directory processes, the load balancer detects the failure (usually through a non-response timeout) and directs requests to an active process running on a different node.
If a node fails, the load balancer detects the failure and redirects requests to an active node. Because each node provides the same services as the others, all requests can be fulfilled by the remaining nodes.
In an OracleAS Cluster (Identity Management), it is necessary to synchronize Oracle Internet Directory metadata—for example, definitions of object classes, attributes, matching rules, ACPs, and password policies—on all the directory server nodes. Figure 8-7 and the accompanying text exemplify the process in which directory server metadata is synchronized between two directory server nodes, Host A and Host B, in an OracleAS Cluster (Identity Management) environment.
Figure 8-7 Metadata Synchronization Process in an OracleAS Cluster (Identity Management) Environment
In the example in Figure 8-7, directory server metadata in an OracleAS Cluster (Identity Management) environment is synchronized as follows:
On Host A, the directory server writes metadata changes to the shared memory on that same host.
OID Monitor on Host A polls the shared memory on that same host. When it discovers a change in the metadata, it retrieves the change.
OID Monitor sends the change to the Oracle Database, which is the repository for the directory server metadata in the OracleAS Cluster (Identity Management) environment.
OID Monitor on Host B polls the Oracle Database for changes in directory server metadata, and retrieves those changes.
OID Monitor on Host B sends the change to the shared memory on that same host.
The directory server on Host B polls the shared memory on that same host for metadata changes. It then retrieves and applies those changes.
In an OracleAS Cluster (Identity Management) environment, the OID Monitor on each node reports to the other nodes that it is running by sending a message to the Oracle Database every 10 seconds. When it does this, it also polls the database server to verify that all other directory server nodes are also running. After 250 seconds, if an OID Monitor on one of the nodes has not reported that it is running, then the other directory server nodes regard it as having failed. At this point, the following occurs on one of the other nodes that are still running:
The OID Monitor on that node brings up the processes that were running on the failed node.
The directory server on that node continues processing the operations that were previously underway on the failed node.
The OID Monitor on that node logs that it has brought up the processes that were previously running on the failed node.
Figure 8-8 and the accompanying text exemplify this process on two hypothetical nodes, Node A and Node B.
As the example in Figure 8-8 shows, the failover process in an OracleAS Cluster (Identity Management) environment follows this process:
Every 10 seconds, the OID Monitor on Node A reports that it is running by sending a message to the database.
The OID Monitor on Node B polls the database to learn which, if any, of the other nodes may have failed.
When OID Monitor on Node B learns that Node A has not responded for 250 seconds, it regards Node A as having failed. It then retrieves from the database the necessary information about the Oracle Internet Directory servers that were running on Node A. In this example, it learns that the directory replication server had been running on Node A.
Because a directory replication server was not already running on Node B, the OID Monitor on Node B starts a directory replication server that corresponds to the directory replication server previously running on Node A.
Note: If Node A, running the directory replication server (oidrepld) and/or the Oracle Directory Integration and Provisioning server (odisrv), fails, then the OID Monitor on Node B starts these processes on Node B after five minutes. When Node A is restarted, OIDMON on Node A starts the servers automatically and requests the OIDMON on Node B to stop the servers that were started for Node A.
If OIDMON detects a time discrepancy of more than 250 seconds between the two nodes, OIDMON on the node that is behind stops all servers. OIDMON on the node that is ahead automatically starts the servers. To correct this problem, synchronize the time and restart the servers on the node that was behind.
See Also: "Oracle Internet Directory Architecture" in the chapter "Directory Concepts and Architecture" in the Oracle Internet Directory Administrator's Guide for information about directory server nodes, directory server instances, and the kinds of directory metadata stored in the database; and "Process Control" in the chapter "Directory Administration Tools" in the Oracle Internet Directory Administrator's Guide.
Note: Normal shutdown is not treated as a failover; that is, after a normal shutdown of Node A, the OID Monitor on Node B does not start these processes on Node B after five minutes.
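The 10-second heartbeat and 250-second silence threshold described above amount to a simple staleness check on each node's last reported timestamp. The sketch below models only that check; the two constants come from this section, and everything else (the function shape, the example ages) is illustrative.

```shell
#!/bin/sh
# Staleness check modelled on the OID Monitor failure detection above:
# a peer that has not reported for more than 250 seconds is regarded
# as failed, and its directory servers are started on a surviving node.

HEARTBEAT_INTERVAL=10    # seconds between "I am running" messages
FAILURE_THRESHOLD=250    # seconds of silence before a peer is declared failed

peer_failed() {          # $1 = peer's last heartbeat (epoch seconds), $2 = now
    [ $(( $2 - $1 )) -gt "$FAILURE_THRESHOLD" ]
}

now=$(date +%s)
if peer_failed $(( now - 300 )) "$now"; then   # a peer silent for 300 seconds
    echo "peer failed: start its oidrepld/odisrv processes locally"
fi
```

A peer that reported 100 seconds ago is still considered alive; one silent for 300 seconds crosses the threshold and triggers the takeover the text describes.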
Follow these rules when managing an OracleAS Cluster (Identity Management) environment:
The port numbers (non-SSL port and SSL port) used by the directory servers must be the same on all the nodes and on the external load balancer for Oracle Internet Directory.
Synchronize the time value on all nodes using Greenwich Mean Time so that there is a discrepancy of no more than 250 seconds between them.
If you change the password to the Oracle Application Server 10g-designated database, then you must update each of the other nodes in the OracleAS Cluster (Identity Management) environment. You change the ODS database user account password using the oidpasswd utility.
To change the ODS database user password, invoke the following command on one of the Oracle Internet Directory nodes:
oidpasswd connect=db-conn-str change_oiddb_pwd=true
On all other Oracle Internet Directory nodes, invoke the following command to synchronize the password wallet:
oidpasswd connect=db-conn-str create_wallet=true
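Because the wallet must be recreated on every other node after the password change, the two oidpasswd invocations above are often wrapped in a loop. A dry-run sketch, assuming ssh access to each node and oidpasswd on its PATH; the node names are placeholders, and db-conn-str stands in for your connect string as in the text:

```shell
#!/bin/sh
# Dry-run sketch of propagating an ODS password change across an
# OracleAS Cluster (Identity Management). Node names and the connect
# string are placeholders; the commands are printed for review.

DB_CONN=db-conn-str          # placeholder database connect string
CHANGE_NODE=oid1             # node on which the password is changed
OTHER_NODES="oid2 oid3"      # remaining Oracle Internet Directory nodes

# Change the ODS password once:
echo "ssh $CHANGE_NODE oidpasswd connect=$DB_CONN change_oiddb_pwd=true"

# Then synchronize the password wallet on every other node:
for node in $OTHER_NODES; do
    echo "ssh $node oidpasswd connect=$DB_CONN create_wallet=true"
done
```

Pipe the output to sh only after substituting real node names and the real connect string.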
You start Oracle Internet Directory and Oracle Directory Integration and Provisioning in the following order:
Make sure the OracleAS Metadata Repository database is running.
On each node:
Set the ORACLE_HOME environment variable to the directory where you installed Oracle Internet Directory.
Run OPMN to start up Oracle Internet Directory and Oracle Directory Integration and Provisioning.
ORACLE_HOME/opmn/bin/opmnctl startall
Start up Application Server Control.
ORACLE_HOME/bin/emctl start iasconsole
To stop Oracle Internet Directory and Oracle Directory Integration and Provisioning, run the following steps on each node:
Set the ORACLE_HOME environment variable to the directory where you installed Oracle Internet Directory.
Run OPMN to stop Oracle Internet Directory and Oracle Directory Integration and Provisioning.
ORACLE_HOME/opmn/bin/opmnctl stopall
Stop Application Server Control.
ORACLE_HOME/bin/emctl stop iasconsole
You can use Application Server Control to manage Oracle Internet Directory and Oracle Directory Integration and Provisioning on each node.
In the Application Server Control URL, you use the physical hostname, for example: http://im1.mydomain.com:1156 (assuming Application Server Control is running on port 1156).
You back up files for Oracle Internet Directory and Oracle Directory Integration and Provisioning using the OracleAS Backup and Recovery Tool. This tool is described in the Oracle Application Server Administrator's Guide.
In an active-passive (also called a cold failover cluster) configuration (see Figure 8-9), you have two nodes in a hardware cluster. You install Oracle Internet Directory and Oracle Directory Integration and Provisioning on the storage shared by these nodes.
Figure 8-9 Oracle Internet Directory and Oracle Directory Integration and Provisioning in an Active-Passive Configuration
In this configuration, only one node is active at any time. This active node runs all the processes. The other node, called the passive or standby node, runs only when the active node fails or when components on the active node fail to run.
You need to configure a virtual server name and virtual IP address for these nodes in the hardware cluster. The virtual server name and virtual IP address point to the node that is the active node.
The nodes in the hardware cluster also run clusterware that is provided by the hardware vendor. The clusterware monitors the active node to ensure that it is running.
To access Oracle Internet Directory or Oracle Directory Integration and Provisioning, clients send requests using the virtual server name.
OPMN also runs on the active node. If Oracle Internet Directory or Oracle Directory Integration and Provisioning fails, OPMN tries to restart it. See Section 2.2.1.1.1, "Automated Process Management with OPMN", which describes OPMN and the components that it manages.
Topologies that Use This Configuration
OPMN runs on the active node to provide process management, monitoring, and notification services for the oidmon process.
For the Oracle Internet Directory component, OPMN monitors oidmon, which in turn monitors the oidldapd, oidrepld, and odisrv Oracle Internet Directory processes. If oidldapd, oidrepld, or odisrv fails, oidmon attempts to restart it locally. Similarly, if oidmon fails, OPMN tries to restart it locally.
Only one odisrv process and one oidrepld process can be active at any time in an OracleAS Cluster (Identity Management), while multiple oidldapd processes can run in the same cluster. Refer to the Oracle Internet Directory Administrator's Guide for more details.
If OPMN or oidmon fails to restart the processes it monitors, the clusterware detects the failure and fails over all the processes to the passive node.
If a node fails, the clusterware detects the failure and fails over all the processes to the passive node.
Note: While the hardware cluster framework can start, monitor, or fail over the processes, these actions are not automatic. You have to do them manually, create scripts to automate them, or use scripts provided by the cluster vendor for OracleAS Cold Failover Cluster.
Perform the following steps to fail over from the active node to the standby node on Solaris systems with Veritas Volume Manager.
On the failed node:
If necessary, stop or kill all Oracle Internet Directory processes on this node.
Ensure that the file system is not busy. If it is busy, check which processes are using the file system and stop them if required.
Unmount the file system using the following command:
# umount <mount_point>
As root, deport the disk group. For example, if you are using Sun Cluster with Veritas Volume Manager, deport the disk group using the following command:
# vxdg deport <disk_group_name>
If the failed node is usable, execute this command to release the virtual IP address:
# ifconfig <interface_name> removeif <virtual_IP>
On the new active node:
As root, execute the following command to assign the virtual IP to this node:
# ifconfig <interface_name> addif <virtual_IP> up
As root, import the disk group. For example, if you are using Sun Cluster with Veritas Volume Manager, use the following commands:
# vxdg import <disk_group_name>
# vxvol -g <disk_group_name> startall
As root, mount the file system using the following command:
# mount /dev/vx/dsk/<disk_group_name>/<volume_name> <mount_point>
Start Oracle Internet Directory processes on this new active node. See Section 8.8.6, "Starting Oracle Internet Directory / Oracle Directory Integration and Provisioning".
On Windows, you use Oracle Fail Safe to perform the failover. See Section 8.6.3, "Manual Steps for Failover on Windows Systems" for details.
Perform the following steps to fail over from the active node to the standby node on Linux systems.
On the failed node:
Make sure all Oracle Internet Directory processes are stopped on the failed node.
Log in as root.
Unmount the file system using the following command:
# umount <mount_point>
If the file system is busy, check which processes are using the file system with the following command:
# fuser -muv <Shared Storage Partition>
Stop the processes, if required, using the following command:
# fuser -k <Shared Storage Partition>
If the failed node is usable, execute the following command to release the virtual IP address:
# ifconfig <interface_name> down
For example:
# ifconfig eth1:1 down
On the new active node:
Log in as root.
Execute the following command to assign the virtual IP address to this node (the new active node):
# ifconfig <interface_name> <virtual_IP> netmask <subnet_mask> up
For example:
# ifconfig eth1:1 144.88.27.125 netmask 255.255.252.0 up
Verify that the virtual IP is up and working by using telnet from a different host (subnet/domain).
Mount the file system using the following command:
# mount <Shared Storage Partition> <mount_point>
For example:
# mount /dev/sdc1 /oracle
Start Oracle Internet Directory processes on this new active node. See Section 8.8.6, "Starting Oracle Internet Directory / Oracle Directory Integration and Provisioning".
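The Linux procedure can be scripted in the same way. The sketch below combines both halves for illustration; in practice, the "failed node" commands run on the old node and the remaining commands run on the new active node. The partition, mount point, interface alias, and Oracle home are hypothetical placeholders, and the DRY_RUN switch (on by default) only prints the commands.

```shell
#!/bin/sh
# Sketch only: Linux cold failover steps for Oracle Internet Directory.
# All values below are placeholders -- substitute your own.
: "${DRY_RUN:=1}"                    # 1 = print commands only; 0 = execute
: "${ORACLE_HOME:=/oracle/OraHome}"  # hypothetical Oracle home
PART=/dev/sdc1           # hypothetical shared storage partition
MNT=/oracle              # hypothetical mount point
IF=eth1:1                # hypothetical interface alias
VIP=144.88.27.125        # example virtual IP from this chapter
MASK=255.255.252.0       # example subnet mask from this chapter

run() {
    if [ "$DRY_RUN" -eq 1 ]; then echo "$@"; else "$@"; fi
}

# --- On the failed node (as root) ---
run fuser -k "$PART"       # stop any processes still using the partition
run umount "$MNT"          # release the shared file system
run ifconfig "$IF" down    # release the virtual IP

# --- On the new active node (as root) ---
run ifconfig "$IF" "$VIP" netmask "$MASK" up    # interface and IP together
run mount "$PART" "$MNT"                        # mount the file system
run "$ORACLE_HOME/opmn/bin/opmnctl" startall    # start OID processes
```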
To provide additional availability and scalability, you can use the cold failover technique in conjunction with Oracle Internet Directory Replication. Figure 8-10 illustrates this configuration.
As Figure 8-10 shows, on a two node cluster:
Virtual Host VHA is hosted by Physical Host A.
Virtual Host VHB is hosted by Physical Host B.
Oracle Internet Directory Node 1 is installed and configured on Virtual host VHA.
Oracle Internet Directory Node 2 is installed and configured on Virtual Host VHB.
Both Oracle Internet Directory nodes are configured for multimaster replication.
LDAP applications can do either of the following:
Communicate directly with either Oracle Internet Directory node by using the respective virtual host names for the LDAP host
Load-balance by means of a LAN re-director or another third-party solution that connects to the two hosts on which the Oracle Internet Directory nodes are configured
See Also: "An Oracle Internet Directory Node" in the "Directory Concepts and Architecture" chapter in Oracle Internet Directory Administrator's Guide
Using cold failover in this way is an improvement over the simple cold failover configuration. There are two Oracle Internet Directory nodes, and the two are in multimaster replication. Oracle Internet Directory is active on both cluster nodes, so this is an active-active configuration. In contrast to the cold failover-only configuration, which is active-passive, the Oracle Internet Directory services are actively available on both cluster nodes at any given time.
Figure 8-11 shows the cold failover process in conjunction with Oracle directory replication.
Figure 8-11 OracleAS Cold Failover Cluster (Identity Management) in Conjunction with Oracle Directory Replication
As Figure 8-11 shows, when Physical Host A fails or is unavailable because of maintenance downtime, the cluster software fails over virtual host VHA to Physical Host B. The Oracle Internet Directory processes that were previously running on Physical Host A are then restarted on Virtual Host VHA, and replication is resumed.
LDAP applications communicating directly with Oracle Internet Directory Node 1 by using host name VHA experience a momentary service outage. After the failover is complete, these applications must reconnect by using the same host name, namely, VHA. The momentary LDAP outage can be avoided completely if the two Oracle Internet Directory nodes are front-ended by a LAN redirector for load balancing.
You start Oracle Internet Directory and Oracle Directory Integration and Provisioning in the following order:
Make sure the OracleAS Metadata Repository database is running.
On the active node:
Set the ORACLE_HOME environment variable to the directory where you installed Oracle Internet Directory.
Run OPMN to start Oracle Internet Directory and Oracle Directory Integration and Provisioning.
ORACLE_HOME/opmn/bin/opmnctl startall
Start up Application Server Control.
ORACLE_HOME/bin/emctl start iasconsole
To stop the processes, run the following steps on the active node:
Set the ORACLE_HOME environment variable to the directory where you installed Oracle Internet Directory.
Run OPMN to stop Oracle Internet Directory and Oracle Directory Integration and Provisioning.
ORACLE_HOME/opmn/bin/opmnctl stopall
Stop Application Server Control.
ORACLE_HOME/bin/emctl stop iasconsole
You can use Application Server Control to manage Oracle Internet Directory and Oracle Directory Integration and Provisioning components on the active node.
In the Application Server Control URL, you use the physical hostname of the active node. For example: http://im1.mydomain.com:1156 (assuming Application Server Control is running on port 1156).
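The start and stop sequences above can be wrapped in one small script. This is a sketch under the assumption that it runs on the active node; the Oracle home is a hypothetical placeholder, and the DRY_RUN switch (on by default) prints the commands instead of running them.

```shell
#!/bin/sh
# Sketch only: start/stop wrapper for Oracle Internet Directory and
# Oracle Directory Integration and Provisioning on the active node.
: "${DRY_RUN:=1}"                    # 1 = print commands only; 0 = execute
: "${ORACLE_HOME:=/oracle/OraHome}"  # hypothetical Oracle home

run() {
    if [ "$DRY_RUN" -eq 1 ]; then echo "$@"; else "$@"; fi
}

case "${1:-start}" in
  start)  # start OPMN-managed processes, then Application Server Control
    run "$ORACLE_HOME/opmn/bin/opmnctl" startall
    run "$ORACLE_HOME/bin/emctl" start iasconsole
    ;;
  stop)   # same order as the manual steps: OPMN first, then the console
    run "$ORACLE_HOME/opmn/bin/opmnctl" stopall
    run "$ORACLE_HOME/bin/emctl" stop iasconsole
    ;;
  *) echo "usage: $0 {start|stop}" >&2; exit 2 ;;
esac
```

Remember that the OracleAS Metadata Repository database must be running before you invoke the start branch.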
You back up files for the Oracle Identity Management components using the OracleAS Backup and Recovery Tool. This tool is described in the Oracle Application Server Administrator's Guide.
In this configuration, you run OracleAS Single Sign-On and Oracle Delegated Administration Services on two or more nodes in an OracleAS Cluster configuration. All the nodes in the OracleAS Cluster are active ("active-active" instead of "active-passive"), and these nodes are front-ended by a hardware load balancer to enable load balancing and failover between the nodes.
You install the Oracle home directory on the local storage of each node. These nodes do not need to be in a hardware cluster.
You also need to configure virtual hostnames for HTTP, HTTPS, LDAP, and LDAPS traffic on the load balancer.
Figure 8-12 OracleAS Single Sign-On and Oracle Delegated Administration Services in an Active-Active Configuration
OracleAS Single Sign-On and Oracle Delegated Administration Services are deployed in the same OC4J_SECURITY instance on each of the nodes in the OracleAS Cluster.
OPMN also runs on each node in this tier. It manages the OC4J and Oracle HTTP Server processes.
Accessing OracleAS Single Sign-On and Oracle Delegated Administration Services
To access OracleAS Single Sign-On and Oracle Delegated Administration Services, clients send requests to the load balancer, using the load balancer's HTTP virtual hostname (for example, sso.mydomain.com in Figure 9-5).
Running OPMN
OPMN runs on each node to monitor the processes. If an OPMN-managed component fails, OPMN tries to restart it. See Section 2.2.1.1.1, "Automated Process Management with OPMN", which describes OPMN and the components that it manages.
Configuring OracleAS Single Sign-On and Oracle Delegated Administration Services
In an OracleAS Cluster, OracleAS Single Sign-On and Oracle Delegated Administration Services have the same configuration across all nodes in the cluster. This enables the load balancer to forward requests to any instance.
For example, if you have two nodes, then OracleAS Single Sign-On instances running on both nodes will have the same configuration, and Oracle Delegated Administration Services instances will also have the same configuration. The load balancer can send requests to either node.
Running Middle Tiers in the Same Tier
You can run Oracle Application Server middle tiers on different nodes on the same tier (see Figure 9-5). If there is no firewall separating the middle tier from OracleAS Single Sign-On and Oracle Delegated Administration Services, you can use the same load balancer to load balance the middle tiers as well. The load balancer is configured with two virtual server names: sso.mydomain.com and mt.mydomain.com.
Topologies that Use This Configuration
Section 9.3, "Distributed OracleAS Cold Failover Cluster (Infrastructure) Topology"
Section 9.7, "Distributed OracleAS Cluster (Identity Management) Topology"
Section 9.5, "Distributed OracleAS Cold Failover Cluster (Identity Management) Topology"
The OracleAS Single Sign-On and Oracle Delegated Administration Services instances need to contain common configuration files. If you need to change the configuration for one instance, you also need to update the configuration in other instances in the OracleAS Cluster.
To ensure that configuration files stay the same across the OracleAS Cluster:
Use the following command to save configuration changes related to OPMN, Oracle HTTP Server, or OC4J_SECURITY on one OracleAS Cluster node:
ORACLE_HOME/dcm/bin/dcmctl updateConfig
The dcmctl updateConfig command propagates configuration changes across the OracleAS Cluster nodes.
Configuration changes to Oracle Internet Directory are not automatically managed across the OracleAS Cluster. If you make changes to configuration files, primarily the wallet files, you need to make the same changes manually to all nodes in the OracleAS Cluster.
In an OracleAS Cluster (Identity Management) for OracleAS Single Sign-On and Oracle Delegated Administration Services, if a node within the cluster fails, the other nodes in the cluster can take over for the failed node, just as in other OracleAS Cluster (Identity Management) configurations. This failover requires a load balancer to detect failures and reroute requests to the remaining running nodes.
OPMN manages Oracle Application Server processes and restarts crashed processes, when possible.
On Windows systems, if one of the cluster nodes goes down, Oracle Fail Safe detects the failure and initiates a failover of the managed Oracle services immediately.
If OC4J_SECURITY is down on a node, the active Oracle HTTP Servers direct traffic to a surviving OC4J_SECURITY instance, because the instances are clustered. If Oracle HTTP Server is down on a node, the surviving Oracle HTTP Server on the other node services the request. The Oracle Internet Directory monitor polls the Oracle Database server to verify that all other Oracle Internet Directory nodes are running. If an Oracle Internet Directory monitor on one of the nodes has not reported after five minutes, the other Oracle Internet Directory nodes regard it as having failed. At this point, the following occurs on one of the nodes that are still running:
The Oracle Internet Directory monitor on that node brings up the processes that were running on the failed node.
The Oracle Internet Directory on that node continues processing the operations that were previously underway on the failed node.
The Oracle Internet Directory monitor on that node logs that it has brought up the processes that were previously running on the failed node.
Note 1: When a node goes down, or the processes on a node are brought down for planned maintenance, reconfigure the load balancer so that it does not send traffic to that node.
Note 2: If the primary node running the directory replication server (oidrepld), the Oracle Directory Integration and Provisioning server (odisrv), or both, fails, then the Oracle Internet Directory monitor on the secondary node starts these processes on the secondary node after five minutes.
Normal shutdown is not treated as a failover: after a normal shutdown, the Oracle Internet Directory monitor on the secondary node does not start these processes on the secondary node.
OPMN runs on each node in the OracleAS Cluster to provide process management, monitoring, and notification services for the OC4J_SECURITY instances and Oracle HTTP Server processes. If any of these processes fails, OPMN detects the failure and attempts to restart it. If the restart is unsuccessful, the load balancer detects the failure (usually through a non-response timeout) and directs requests to an active process running on a different node.
If a node fails, the load balancer detects the failure and redirects requests to an active node in the OracleAS Cluster. Because each node provides the same services as the others, the remaining nodes can fulfill all requests.
You start up OracleAS Single Sign-On and Oracle Delegated Administration Services in the following order:
Make sure the OracleAS Metadata Repository database is running.
Make sure Oracle Internet Directory is running.
On each node:
Set the ORACLE_HOME environment variable to the OracleAS Single Sign-On/Oracle Delegated Administration Services Oracle home.
Run OPMN to start up the components.
ORACLE_HOME/opmn/bin/opmnctl startall
Start up Application Server Control.
ORACLE_HOME/bin/emctl start iasconsole
To stop OracleAS Single Sign-On and Oracle Delegated Administration Services, run the following steps on each node:
Set the ORACLE_HOME environment variable to the OracleAS Single Sign-On/Oracle Delegated Administration Services Oracle home.
Run OPMN to stop the components.
ORACLE_HOME/opmn/bin/opmnctl stopall
Stop Application Server Control.
ORACLE_HOME/bin/emctl stop iasconsole
You can use Application Server Control to manage the OracleAS Single Sign-On and Oracle Delegated Administration Services components on each node.
In the Application Server Control URL, you use the physical hostname. For example: http://im1.mydomain.com:1156 (assuming Application Server Control is running on port 1156).
You back up files for the OracleAS Single Sign-On and Oracle Delegated Administration Services components using the OracleAS Backup and Recovery Tool. This tool is described in the Oracle Application Server Administrator's Guide.
In an active-passive (also called a cold failover cluster) configuration, you have two nodes in a hardware cluster. You install the OracleAS Single Sign-On and Oracle Delegated Administration Services components on the storage shared by these nodes.
Figure 8-13 OracleAS Single Sign-On and Oracle Delegated Administration Services in an Active-Passive Configuration
In this configuration, only one node is active at any time. This active node runs all the processes. The other node, called the passive or standby node, runs only when the active node fails or when components on the active node fail to run.
You need to configure a virtual server name and virtual IP address for these nodes in the hardware cluster. The virtual server name and virtual IP address point to the node that is the active node.
The nodes in the hardware cluster also run clusterware that is provided by the hardware vendor. The clusterware monitors the active node to ensure that it is running.
To access the OracleAS Single Sign-On and Oracle Delegated Administration Services components, clients send requests using the virtual server name.
OPMN also runs on each node. If an OPMN-managed component fails, OPMN tries to restart it. See Section 2.2.1.1.1, "Automated Process Management with OPMN", which describes OPMN and the components that it manages.
OracleAS Single Sign-On and Oracle Delegated Administration Services are applications deployed on the OC4J_SECURITY instance.
Topologies that Use This Configuration
OPMN runs on the active node to provide process management, monitoring, and notification services for the OC4J_SECURITY and Oracle HTTP Server processes. If any of these processes fails, OPMN detects the failure and attempts to restart it. If the restart is unsuccessful, the clusterware detects the failure and fails over all the processes to the passive node.
If a node fails, the clusterware detects the failure and fails over all the processes to the passive node.
Note: While the hardware cluster framework can start, monitor, or fail over OracleAS Infrastructure processes, the following actions are not automatic. You have to perform them manually, or you can create scripts to automate them.
Perform the following steps to fail over from the active node to the standby node for Solaris systems with a Veritas Volume Manager.
On the failed node:
If necessary, stop or kill all OracleAS Single Sign-On and Oracle Delegated Administration Services processes on this node.
Ensure that the file system is not busy. If it is busy, check which processes are using the file system and stop them if required.
Unmount the file system using the following command:
# umount <mount_point>
As root, deport the disk group. For example, if you are using Sun Cluster with Veritas Volume Manager, deport the disk group using the following commands:
# vxdg deport <disk_group_name>
If the failed node is usable, execute this command to release the virtual IP address:
# ifconfig <interface_name> removeif <virtual_IP>
On the new active node:
As root, execute the following command to assign the virtual IP to this node:
# ifconfig <interface_name> addif <virtual_IP> up
As root, import the disk group. For example, if you are using Sun Cluster with Veritas Volume Manager, use the following commands:
# vxdg import <disk_group_name>
# vxvol -g <disk_group_name> startall
As root, mount the file system using the following command:
# mount /dev/vx/dsk/<disk_group_name>/<volume_name> <mount_point>
Start all OracleAS Single Sign-On and Oracle Delegated Administration Services processes on this new active node. See Section 8.10.5, "Starting OracleAS Single Sign-On / Oracle Delegated Administration Services".
On Windows, you use Oracle Fail Safe to perform the failover. See Section 8.6.3, "Manual Steps for Failover on Windows Systems" for details.
Perform the following steps to fail over from the active node to the standby node on Linux systems.
On the failed node:
Make sure all the OracleAS Single Sign-On and Oracle Delegated Administration Services processes are stopped on the failed node.
Log in as root.
Unmount the file system using the following command:
# umount <mount_point>
If the file system is busy, check which processes are using the file system with the following command:
# fuser -muv <Shared Storage Partition>
Stop the processes, if required, using the following command:
# fuser -k <Shared Storage Partition>
If the failed node is usable, execute the following command to release the virtual IP address:
# ifconfig <interface_name> down
For example:
# ifconfig eth1:1 down
On the new active node:
Log in as root.
Execute the following command to assign the virtual IP address to this node (the new active node):
# ifconfig <interface_name> <virtual_IP> netmask <subnet_mask> up
For example:
# ifconfig eth1:1 144.88.27.125 netmask 255.255.252.0 up
Verify that the virtual IP is up and working by using telnet from a different host (subnet/domain).
Mount the file system using the following command:
# mount <Shared Storage Partition> <mount_point>
For example:
# mount /dev/sdc1 /oracle
Start OracleAS Single Sign-On and Oracle Delegated Administration Services on this new active node. See Section 8.10.5, "Starting OracleAS Single Sign-On / Oracle Delegated Administration Services".
You start up OracleAS Single Sign-On and Oracle Delegated Administration Services in the following order:
Make sure the OracleAS Metadata Repository database is running.
Make sure Oracle Internet Directory is running.
On the active node:
Set the ORACLE_HOME environment variable to the OracleAS Single Sign-On/Oracle Delegated Administration Services Oracle home.
Run OPMN to start up OracleAS Single Sign-On and Oracle Delegated Administration Services.
ORACLE_HOME/opmn/bin/opmnctl startall
Start up Application Server Control.
ORACLE_HOME/bin/emctl start iasconsole
To stop OracleAS Single Sign-On and Oracle Delegated Administration Services, run the following steps on the active node:
Set the ORACLE_HOME environment variable to the OracleAS Single Sign-On/Oracle Delegated Administration Services Oracle home.
Run OPMN to stop OracleAS Single Sign-On and Oracle Delegated Administration Services.
ORACLE_HOME/opmn/bin/opmnctl stopall
Stop Application Server Control.
ORACLE_HOME/bin/emctl stop iasconsole
You can use Application Server Control to manage the components on the active node.
In the Application Server Control URL, you use the physical hostname of the active node. For example: http://im1.mydomain.com:1156 (assuming Application Server Control is running on port 1156).
You back up files for OracleAS Single Sign-On and Oracle Delegated Administration Services using the OracleAS Backup and Recovery Tool. This tool is described in the Oracle Application Server Administrator's Guide.
Use the following steps to check the status of Oracle Identity Management components:
Check the status of OPMN and OPMN-managed processes:
ORACLE_HOME/opmn/bin/opmnctl status
Check the status of Application Server Control.
ORACLE_HOME/bin/emctl status iasconsole
Check the status of Oracle Internet Directory:
ORACLE_HOME/ldap/bin/ldapcheck
Verify that you can log in to Oracle Internet Directory:
ORACLE_HOME/bin/oidadmin
Use the following login and password:
Login: orcladmin
Password: <orcladmin_password>
After installation, the orcladmin password is the same as the ias_admin password.
Verify you can log in to OracleAS Single Sign-On:
http://host:HTTP_port/pls/orasso
For host, you specify the virtual hostname.
Login: orcladmin
Password: <orcladmin_password>
Verify you can log in to Oracle Delegated Administration Services:
http://host:HTTP_port/oiddas
For host, you specify the virtual hostname.
Login: orcladmin
Password: <orcladmin_password>
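The non-interactive checks above can be gathered into a quick status script. This is a sketch with a hypothetical Oracle home and a DRY_RUN switch that, by default, prints the commands instead of executing them.

```shell
#!/bin/sh
# Sketch only: non-interactive status checks for Oracle Identity Management.
: "${DRY_RUN:=1}"                    # 1 = print commands only; 0 = execute
: "${ORACLE_HOME:=/oracle/OraHome}"  # hypothetical Oracle home

run() {
    if [ "$DRY_RUN" -eq 1 ]; then echo "$@"; else "$@"; fi
}

run "$ORACLE_HOME/opmn/bin/opmnctl" status        # OPMN-managed processes
run "$ORACLE_HOME/bin/emctl" status iasconsole    # Application Server Control
run "$ORACLE_HOME/ldap/bin/ldapcheck"             # Oracle Internet Directory
```

The OracleAS Single Sign-On and Oracle Delegated Administration Services URLs still need to be verified in a browser, because they require an interactive orcladmin login.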