Oracle® Application Server Administrator's Guide
10g Release 2 (10.1.2) B13995-06
This chapter describes how to copy a 9.0.4.1 or 10.1.2.0.2 production environment that includes a middle-tier instance, Identity Management with a Metadata Repository, and a product Metadata Repository for OracleAS Portal and OracleBI Discoverer metadata to a test environment. The Identity Management and product metadata can share the same Metadata Repository, or they can each use a dedicated Metadata Repository. You use this procedure for the purpose of:
Periodically taking a snapshot of the production environment
Testing an upgrade to 10.1.2.0.2
Testing the application of a patchset
If you only need to add middle-tier nodes, perform the procedures in Chapter 10 instead.
This chapter contains the following topics:
In this configuration, you have an existing production environment that includes a middle-tier instance, an Identity Management installation with a Metadata Repository, a product Metadata Repository for OracleAS Portal and OracleBI Discoverer metadata, and a customer database. You would like to create a copy of this production environment in the test environment.
For this configuration, you create the test environment by installing a test Identity Management and product Metadata Repository and then moving the production data into them. You then install a test middle-tier instance and deploy applications to this instance. You use the existing customer database, importing the OracleBI Discoverer data.
Figure 11-1 shows this configuration.
Figure 11-1 Creating a Test Environment from a Production Environment and Copying Metadata
This use case assumes the following configuration:
The production environment includes a middle-tier instance, an Identity Management installation with a Metadata Repository, and an additional Metadata Repository for (OracleAS Portal and OracleBI Discoverer) product metadata. This procedure also applies to configurations in which both Identity Management and product metadata share the same Metadata Repository.
The production environment accesses a customer database that contains OracleBI Discoverer End User Layers (EULs).
The production environment is configured with release 9.0.4.1 or 10.1.2.0.2.
The new test environment has not been created.
This procedure contains the following tasks:
Prerequisite Export Steps
Task 1: Install Test Identity Management
Task 2: Copy Data from Production Identity Management to the Test Environment
Task 3: Set Up Test Product Metadata Repository
Task 4: Copy Data from the Production Product Metadata Repository to the Test Environment
Task 5: Install the Test Middle Tier
Task 6: Deploy Applications to the Test Middle Tier
Task 7: Copy OracleBI Discoverer Data
Task 8: Clean Up the Test Environment
In the procedures, INFRA_HOME references the Oracle home of OracleAS Infrastructure, and MIDTIER_HOME references the Oracle home of Oracle Application Server.
Prerequisite Export Steps
See Also: Oracle Identity Management User Reference for more information about the Oracle Internet Directory management tools mentioned in this section, including ldapsearch and ldifwrite
To obtain information and export data from the production environment prior to setting up the test environment:
Collect the following information needed to access the production environment:
Operating system login credentials for the computer on which the Metadata Repositories reside
Operating system login credentials for the middle-tier computer and the location of its Oracle home
Oracle Internet Directory connection information, including the host name, port, and administration DN and password
ODS schema password
PORTAL schema password
ORASSO schema password
Export data from the production Oracle Internet Directory:
Query the production directory server to find the default realm:
PRODUCTION_INFRA_HOME/bin/ldapsearch -h production_oid_host -p production_oid_port -D cn=orcladmin -w production_orcladmin_passwd -b "cn=common, cn=products, cn=oraclecontext" -s base "objectclass=*" orcldefaultSubscriber
In the syntax:
production_oid_host is the host of the production directory server.
production_oid_port is the LDAP port of the production directory server.
production_orcladmin_passwd is the password of the superuser DN (cn=orcladmin).
The response returns the orcldefaultSubscriber, in LDIF format:
cn=Common,cn=Products,cn=OracleContext
orcldefaultsubscriber=dc=us,dc=company,dc=com
You will need this information later when you install the test Identity Management.
Query the production directory server to determine the configured user search bases and group search bases:
PRODUCTION_INFRA_HOME/bin/ldapsearch -L -h production_oid_host -p production_oid_port -D cn=orcladmin -w production_orcladmin_passwd -b "cn=common, cn=products, cn=OracleContext, orcldefaultsubscriber" -s base "objectclass=*" orclCommonUserSearchBase orclCommonGroupSearchBase orclCommonNicknameattribute > /tmp/OracleContext.ldif
In the syntax, orcldefaultsubscriber is the orcldefaultsubscriber value returned in Step 2a of this procedure.
The response looks similar to the following:
dn: cn=Common,cn=Products,cn=OracleContext,dc=us,dc=company,dc=com
orclcommonusersearchbase: cn=users,dc=us,dc=company,dc=com
orclcommongroupsearchbase: cn=Groups,dc=us,dc=company,dc=com
orclcommonnicknameattribute: uid
Export all users from the production directory server:
PRODUCTION_INFRA_HOME/bin/ldifwrite -c production_oid_net_service_name -b "orclcommonusersearchbase" -f /tmp/user.ldif
In the syntax:
production_oid_net_service_name specifies the net service name for the production directory, as defined in the tnsnames.ora file.
orclcommonusersearchbase is the orclcommonusersearchbase subtree written out in LDIF format, such as cn=users,dc=us,dc=company,dc=com.
When prompted for the directory password, enter the password of the ODS user. This password defaults to the password assigned for administrator ias_admin during installation. You should run the ldifwrite command once for each user search base value (orclcommonusersearchbase) from the Oracle Context settings query.
Export all groups from the production directory server:
PRODUCTION_INFRA_HOME/bin/ldifwrite -c production_oid_net_service_name -b "orclcommongroupsearchbase" -f /tmp/group.ldif
In the syntax, orclcommongroupsearchbase is the orclcommongroupsearchbase subtree written out in LDIF format, such as cn=Groups,dc=us,dc=company,dc=com.
When prompted for the directory password, enter the password of the ODS user. This password defaults to the password assigned for administrator ias_admin during installation. You should run the ldifwrite command once for each group search base value (orclcommongroupsearchbase) from the Oracle Context settings query.
Save the Delegated Administration Services administrative groups subtree from the production directory server:
PRODUCTION_INFRA_HOME/bin/ldifwrite -c production_oid_net_service_name -b "cn=groups, cn=OracleContext, orcldefaultsubscriber" -f /tmp/dasAdminGroups.ldif
Copy the password policy configuration entry, and append the entry with the following ldapsearch commands:
PRODUCTION_INFRA_HOME/bin/ldapsearch -L -h production_oid_host -p production_oid_port -D cn=orcladmin -w production_orcladmin_passwd -b "cn=PwdPolicyEntry, cn=common, cn=products, cn=OracleContext" -s base "objectclass=*" >> /tmp/pwdPolicy.ldif
PRODUCTION_INFRA_HOME/bin/ldapsearch -L -h production_oid_host -p production_oid_port -D cn=orcladmin -w production_orcladmin_passwd -b "cn=PwdPolicyEntry, cn=common, cn=products, cn=OracleContext, orcldefaultsubscriber" -s base "objectclass=*" >> /tmp/pwdPolicy.ldif
In the syntax, orcldefaultsubscriber is the orcldefaultsubscriber value returned in Step 2a of this procedure.
Prepare the product Metadata Repository in the production environment for export:
List the schemas to be exported:
It is necessary to identify all of the schemas that need to be exported, including the PORTAL and related schemas, the DISCOVERER5 schema, and any schemas used for database providers or OracleAS Portal Portlet Builder components.
To list all the schemas, run the following query from SQL*Plus in the PORTAL schema:
SET LINESIZE 132
SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE FROM DBA_USERS
WHERE USERNAME IN (user, user||'_PUBLIC', user||'_DEMO', user||'_APP', 'DISCOVERER5')
OR USERNAME IN (SELECT DISTINCT OWNER FROM WWAPP_APPLICATION$ WHERE NAME != 'WWV_SYSTEM')
ORDER BY USERNAME;
The response looks similar to the following output:
USERNAME                       DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE
------------------------------ ------------------------------ ------------------------------
ARUN                           USERS                          TEMP
DISCOVERER5                    DISCO_PTM5_META                DISCO_PTM5_TEMP
PORTAL                         PORTAL                         PORTAL_TMP
PORTAL_APP                     PORTAL                         PORTAL_TMP
PORTAL_DEMO                    PORTAL                         PORTAL_TMP
PORTAL_PUBLIC                  PORTAL                         PORTAL_TMP
SCOTT                          SYSTEM                         TEMP
TESTER_DAT                     SYSTEM                         TEMP
TESTER_SCH                     SYSTEM                         TEMP
UPG_SCHEMA                     USERS                          TEMP
UPG_SCHEMDB                    USERS                          TEMP

11 rows selected.
This query lists only schemas that are directly related to database providers or Portlet Builder components registered in OracleAS Portal. If any of these schemas additionally reference objects in other schemas, then add those schemas to the list of schemas to be exported, as shown in the sketch that follows.
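One way to check for such cross-schema references, offered here as a hedged sketch (the owner list shown is illustrative and should be replaced with the schemas returned by the query above), is to query the DBA_DEPENDENCIES view from SQL*Plus as a user with DBA privileges:
SELECT DISTINCT REFERENCED_OWNER
FROM DBA_DEPENDENCIES
WHERE OWNER IN ('PORTAL','PORTAL_APP','PORTAL_DEMO','PORTAL_PUBLIC','DISCOVERER5')
AND REFERENCED_OWNER NOT IN ('SYS','SYSTEM','PUBLIC')
AND REFERENCED_OWNER NOT IN ('PORTAL','PORTAL_APP','PORTAL_DEMO','PORTAL_PUBLIC','DISCOVERER5');
Any owners returned that are not already in your export list are candidates to add.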
List the tablespaces used.
It is necessary to ensure that the tablespaces on the target database match the ones used in the source. To list the tablespaces used in the source database, run the following query from SQL*Plus as the PORTAL user:
SELECT DISTINCT TABLESPACE_NAME FROM DBA_SEGMENTS WHERE OWNER IN (schema_list)
UNION
SELECT DISTINCT DEFAULT_TABLESPACE FROM DBA_USERS WHERE USERNAME IN (schema_list)
/
SELECT DISTINCT TEMPORARY_TABLESPACE FROM DBA_USERS WHERE USERNAME IN (schema_list)
/
where schema_list contains the schemas generated from Step 3a of this procedure.
The query and response look similar to the following output:
SQL> SELECT DISTINCT TABLESPACE_NAME FROM DBA_SEGMENTS WHERE OWNER IN
  2  ('ARUN','PORTAL','DISCOVERER5','PORTAL_APP','PORTAL_DEMO','PORTAL_PUBLIC','SCOTT',
  3  'TESTER_DAT','TESTER_SCH','UPG_SCHEMA','UPG_SCHEMDB','FLIGHTS','PROV9022')
  4  UNION
  5  SELECT DISTINCT DEFAULT_TABLESPACE FROM DBA_USERS WHERE USERNAME IN
  6  ('ARUN','PORTAL','DISCOVERER5','PORTAL_APP','PORTAL_DEMO','PORTAL_PUBLIC','SCOTT',
  7  'TESTER_DAT','TESTER_SCH','UPG_SCHEMA','UPG_SCHEMDB','FLIGHTS','PROV9022');

TABLESPACE_NAME
------------------------------
INDX
DISCO_PTM5_CACHE
DISCO_PTM5_META
PORTAL
PORTAL_DOC
PORTAL_IDX
PORTAL_LOG
SYSTEM
USERS

9 rows selected.

SQL> SELECT DISTINCT TEMPORARY_TABLESPACE FROM DBA_USERS WHERE USERNAME IN
  2  ('ARUN','PORTAL','PORTAL_APP','PORTAL_DEMO','PORTAL_PUBLIC','SCOTT','TESTER_DAT',
  3  'TESTER_SCH','UPG_SCHEMA','UPG_SCHEMDB','FLIGHTS','PROV9022');

TEMPORARY_TABLESPACE
------------------------------
DISCO_PTM5_TEMP
PORTAL_TMP
TEMP
List schemas that have snapshots created in them.
To identify the schemas that have snapshots created in them, run the following query from SQL*Plus as the SYS user:
SQL> SELECT OWNER, NAME FROM DBA_SNAPSHOTS WHERE OWNER IN (schema_list);
where schema_list contains the schemas generated from Step 3a of this procedure.
The query and response look similar to the following output:
SQL> SELECT OWNER, NAME FROM DBA_SNAPSHOTS
  2  WHERE OWNER IN ('ARUN','DISCOVERER5','PORTAL','PORTAL_APP','PORTAL_DEMO','PORTAL_PUBLIC','SCOTT','TESTER_DAT','TESTER_SCH','UPG_SCHEMA','UPG_SCHEMDB');

OWNER                          NAME
------------------------------ ------------------------------
PORTAL_DEMO                    EMP_SNAPSHOT
SCOTT                          EMP_SNAPSHOT
Determine the character set of the production database:
SQL> SELECT userenv('language') FROM dual;

USERENV('LANGUAGE')
----------------------------------------------------
AMERICAN_AMERICA.WE8MSWIN1252
Set the NLS_LANG environment variable before performing the database export:
setenv NLS_LANG AMERICAN_AMERICA.WE8MSWIN1252
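The setenv syntax shown is for the C shell. If your login shell is a Bourne-compatible shell such as sh, ksh, or bash (an assumption about your environment), the equivalent setting is:
export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252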
Prior to exporting data, run the catexp.sql script from the PRODUCTION_INFRA_HOME/rdbms/admin directory with SYSDBA privileges:
CONNECT SYS/password AS SYSDBA
@catexp.sql
This script creates the data dictionary and public synonyms for many of its views.
Export data from the product Metadata Repository in the production environment:
Stop the Oracle HTTP Server:
PRODUCTION_INFRA_HOME/opmn/bin/opmnctl stopall
Run the Export utility:
The actual export is done with the database exp command as follows, with the example showing just the four core OracleAS Portal schemas. Include any other schemas identified in the SELECT statement from Step 3a of this procedure as well.
PRODUCTION_INFRA_HOME/bin/exp portal/portal_password@portal_net_service_name file=/tmp/portal_exp.dmp grants=y statistics=none log=/tmp/portal_exp.log owner=portal,portal_app,portal_demo,portal_public
If the export log contains EXP-00079, ensure that virtual private portals are not set up:
EXP-00079: Data in table "WWA_MODULE_EVENTS$" is protected. Conventional path may only be exporting partial table.
See Also: Oracle Application Server Portal Configuration Guide for more information about virtual private portals
Start the Oracle HTTP Server:
PRODUCTION_INFRA_HOME/opmn/bin/opmnctl startall
Export data from OracleAS Single Sign-On in the production Identity Management by extracting the OracleAS Single Sign-On data with the ssomig utility:
PRODUCTION_INFRA_HOME/sso/bin/ssomig -s orasso -p orasso_password -c production_sso_net_service_name -log_d /tmp -export -log_f ssomig.log -d ssomig.dmp
The response looks similar to the following output:
SSO Migration Tool: Release 9.0.4.0.0 - Production on Fri Feb 25 16:15:19 2005
Copyright (c) 2002-2005 Oracle. All rights reserved.
Verifying SSO schema information...
Data export started at Fri Feb 25 16:15:22 2005

Log Directory        : /tmp
Log File Name        : /tmp/ssomig.log
Export File Name     : /tmp/ssomig.dmp
SSO Schema Name      : orasso
SSO Schema Password  : ******
Connect String       : asdbupg

Copying data...Done
Exporting data...Done
Creating configuration file...Done

Exported Dump File: /tmp/ssomig.dmp
Configuration File: /tmp/ssoconf.log
Data Export Complete
If your production environment includes OracleBI Discoverer, export EUL data:
For each EUL in the customer database, create a copy using the following command from the production environment:
PRODUCTION_MIDTIER_HOME/bin/eulapi -connect old_eul_owner/password@database_service_name -export /tmp/file1.eex -all -log /tmp/exp.log
If you intend to copy data to another customer database in the test environment, use the service name of the test database for database_service_name.
Note: The eulapi utility does not exist on Windows in releases 9.0.4.1 and 10.1.2.0.2. For environments running these releases, download the patches for this utility. The related Automated Release Updates (ARU) number for 9.0.4.1 is ARU 7462620, and for 10.1.2.0.2 it is ARU 7462623. You can download these patches from OracleMetaLink.
Task 1: Install Test Identity Management
Install and set up the test Identity Management and its associated Metadata Repository. You can install these OracleAS Infrastructure components on the same computer or on separate computers.
To install the test Identity Management:
Install Oracle Application Server using Oracle Universal Installer.
From the Select a Product to Install screen, select OracleAS Infrastructure.
From the Select Installation Type screen, select the appropriate option for how you want the Identity Management pieces installed:
To install test Identity Management and its associated Metadata Repository on the same computer, select Identity Management and OracleAS Metadata Repository.
To install test Identity Management and its associated Metadata Repository on separate computers, select the OracleAS Metadata Repository option for one of the computers, and then select the Identity Management option for the other computer.
When you are prompted with the Specify Namespace in Internet Directory screen, enter the value of the orcldefaultsubscriber obtained from the query of the production Oracle Internet Directory server in Step 2a of "Prerequisite Export Steps".
Create the database using the same character set as the production instance that you are copying, to avoid any character set conversion issues. You do not have to specify the same port numbers, but doing so can simplify the configuration.
Perform a backup of the test Identity Management configuration files and Metadata Repository with the OracleAS Backup and Recovery Tool. See Chapter 20.
Task 2: Copy Data from Production Identity Management to the Test Environment
See Also: Oracle Identity Management User Reference for more information about the Oracle Internet Directory management tools mentioned in this section, including ldapmodify, ldapsearch, bulkload, and bulkdelete
Note: To run the Oracle Internet Directory management tools on Windows operating systems, you need a UNIX emulation utility.
To copy Identity Management data to the test environment:
Prior to importing users and groups to the test directory server, use the bulkdelete command to remove the existing default users, groups containers, and Delegated Administration Services administrative groups subtree from the test directory server.
Stop the directory server with the following command:
TEST_INFRA_HOME/opmn/bin/opmnctl stopproc ias-component=OID
Run the following commands:
TEST_INFRA_HOME/ldap/bin/bulkdelete.sh -connect test_oid_net_service_name -base "cn=users, orcldefaultsubscriber"
TEST_INFRA_HOME/ldap/bin/bulkdelete.sh -connect test_oid_net_service_name -base "cn=groups, orcldefaultsubscriber"
TEST_INFRA_HOME/ldap/bin/bulkdelete.sh -connect test_oid_net_service_name -base "cn=groups, cn=OracleContext, orcldefaultsubscriber"
In the syntax:
test_oid_net_service_name specifies the net service name for the test directory, as defined in the tnsnames.ora file.
orcldefaultsubscriber is the orcldefaultsubscriber value returned from Step 2a of "Prerequisite Export Steps".
You will be prompted to provide the password of the ODS schema to complete these commands.
Use the bulkload utility to load users from the user.ldif file you created in Step 2c of "Prerequisite Export Steps":
Start the directory server with the following command:
TEST_INFRA_HOME/opmn/bin/opmnctl startproc ias-component=OID
To run the bulkload utility, set the directory server mode to read/modify:
On Windows, from Oracle Directory Manager, navigate to the server entry (the main node under the Oracle Internet Directory Servers), and change the Server Mode attribute from Read/Write to Read/Modify from the drop-down list.
If you prefer to use the LDAP command-line utilities, use the ldapmodify command:
ldapmodify -h test_oid_host -p test_oid_port -D cn=orcladmin -w test_orcladmin_pwd -v -f rm.ldif
where rm.ldif is a file you create, with the following contents:
dn:
changetype: modify
replace: orclservermode
orclservermode: rm
After changing the server mode, you need to stop the Oracle Internet Directory processes:
TEST_INFRA_HOME/opmn/bin/opmnctl stopproc ias-component=OID
If the production Identity Management has been upgraded from release 9.0.2.3 to 9.0.4.1, remove the entries starting with orclactivestartdate from the user.ldif file. If these entries are not removed, the bulkload utility will fail.
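As a hedged example, assuming a UNIX shell and that the attribute name appears at the start of each such line in the LDIF file, you can strip these entries into a cleaned copy (the output file name is illustrative):
grep -vi '^orclactivestartdate' /tmp/user.ldif > /tmp/user_clean.ldif
Load the cleaned file in the next step in place of /tmp/user.ldif.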
Load users into the test Oracle Internet Directory by using the bulkload utility to load the LDIF file generated from the production system. You created this file in Step 2c of "Prerequisite Export Steps". When invoking the bulkload utility, be sure to specify the absolute path of the LDIF file, even if it is in the current directory.
TEST_INFRA_HOME/ldap/bin/bulkload.sh -connect test_oid_net_service_name -check -generate -restore -load -append /tmp/user.ldif
The response looks similar to the following output:
Verifying node "orcl"
-----------------------------
This tool can only be executed if you know database user password for OiD on orcl
Enter OiD password ::
Provide the password for the schema used by Oracle Internet Directory. This defaults to the password assigned for the ias_admin administrator during installation.
This command loads all of the users, provided no errors are reported in check mode on the LDIF file exported from the production Oracle Internet Directory.
Start the directory server with the following command:
TEST_INFRA_HOME/opmn/bin/opmnctl startproc ias-component=OID
Move the pwdPolicy.ldif file you created in Step 2f of "Prerequisite Export Steps" to the test environment, load it into the test directory server, and then stop the directory server with the following commands:
TEST_INFRA_HOME/bin/ldapaddmt -h test_oid_host -p test_oid_port -D cn=orcladmin -w test_orcladmin_passwd -v -f /tmp/pwdPolicy.ldif
TEST_INFRA_HOME/opmn/bin/opmnctl stopproc ias-component=OID
Ensure that the password policy in the default subscriber matches that of the production environment; one way to check is sketched below.
The ldapaddmt command works only with release 9.0.4 or later. For older releases, you must manually update the password policy entry.
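One way to verify (after the test directory server has been started again later in this procedure) is to query the same password policy entry on the test directory server and compare the output with the exported /tmp/pwdPolicy.ldif file; this reuses the ldapsearch pattern from "Prerequisite Export Steps", with orcldefaultsubscriber being the value from Step 2a:
TEST_INFRA_HOME/bin/ldapsearch -L -h test_oid_host -p test_oid_port -D cn=orcladmin -w test_orcladmin_passwd -b "cn=PwdPolicyEntry, cn=common, cn=products, cn=OracleContext, orcldefaultsubscriber" -s base "objectclass=*"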
Use the bulkload utility to load groups from the group.ldif file you created in Step 2d of "Prerequisite Export Steps".
TEST_INFRA_HOME/ldap/bin/bulkload.sh -connect test_oid_net_service_name -check -generate -restore -load -append /tmp/group.ldif
When invoking the bulkload utility, be sure to specify the absolute path of the LDIF file, even if it is in the current directory.
This command loads all of the groups, provided no errors are reported in check mode on the LDIF file exported from the production Oracle Internet Directory.
Load the administrative groups subtree saved from production in Step 2e of "Prerequisite Export Steps" into the test environment.
TEST_INFRA_HOME/ldap/bin/bulkload.sh -connect test_oid_net_service_name -check -generate -restore -load -append /tmp/dasAdminGroups.ldif
Start the test directory server and other Identity Management components:
cd TEST_INFRA_HOME/opmn/bin
opmnctl startall
opmnctl: starting opmn and all managed processes...
After performing the bulkload commands, restore the server mode of the directory server to the normal setting:
ldapmodify -h test_oid_host -p test_oid_port -D cn=orcladmin -w test_orcladmin_pwd -v -f rw.ldif
where rw.ldif is a file you create, with the following contents:
dn:
changetype: modify
replace: orclservermode
orclservermode: rw
If you use Oracle Directory Manager instead, change the Server Mode attribute of the server entry back to Read/Write.
On the test directory server, configure Oracle Context parameters:
In Step 2b of "Prerequisite Export Steps", you created a /tmp/OracleContext.ldif file containing the configured user and group search bases. Edit that file as follows, using the respective output values obtained in Step 2b in place of value. Note that orclCommonUserSearchBase might contain only one value.
dn: cn=common,cn=products,cn=OracleContext,default_subscriber
changetype: modify
replace: orclCommonUserSearchBase
orclCommonUserSearchBase: value1
orclCommonUserSearchBase: value2
-
replace: orclCommonGroupSearchBase
orclCommonGroupSearchBase: value1
orclCommonGroupSearchBase: value2
-
replace: orclCommonNickNameattribute
orclCommonNickNameattribute: value
Configure the user and group search base on the test directory server:
TEST_INFRA_HOME/bin/ldapmodify -D cn=orcladmin -w orcladmin_password -h test_oid_host -p test_oid_port -v -f /tmp/OracleContext.ldif
Change the value of the orclcommonnicknameattribute:
Run the ldapsearch command to see the test nickname attribute:
TEST_INFRA_HOME/bin/ldapsearch -h test_oid_host -p test_oid_port -D cn=orcladmin -w test_orcladmin_passwd -b "cn=Common,cn=Products,cn=OracleContext,dc=us,dc=company,dc=com" -s base "objectclass=*" orclcommonnicknameattribute
The response returns the orclcommonnicknameattribute value:
orclcommonnicknameattribute: uid
Compare this value with the production nickname attribute, obtained in Step 2b of "Prerequisite Export Steps".
For instances that have been upgraded from 9.0.2.3 to 9.0.4.1, the production nickname attribute may be set to orclcommonnicknameattribute=cn rather than uid. For these instances, you need to update the test directory server with this value.
You can navigate to that entry from Oracle Directory Manager and change its value.
If you prefer to use the LDAP command-line utilities, use the ldapmodify command:
ldapmodify -h test_oid_host -p test_oid_port -D cn=orcladmin -w test_orcladmin_pwd -f nickname.ldif
where nickname.ldif is a file you create, with the following contents:
dn: cn=Common,cn=Products,cn=OracleContext,dc=us,dc=company,dc=com
changetype: modify
replace: orclcommonnicknameattribute
orclcommonnicknameattribute: cn
Perform Step 8a of this procedure to ensure that the value of orclcommonnicknameattribute has been correctly updated.
Import OracleAS Single Sign-On External Applications data.
This step imports any external applications and password store data that was exported in Step 5 of "Prerequisite Export Steps". Import the OracleAS Single Sign-On data with the ssomig utility in -import mode as follows:
TEST_INFRA_HOME/sso/bin/ssomig -import -overwrite -s orasso -p orasso_pwd -c test_sso_net_service_name -d exp_dumpfile_name -log_d dumpfile_dir -discoforce
Ensure that exp_dumpfile_name is the file name of the dump file and that the file is in the dumpfile_dir directory. The dumpfile_dir directory must also contain the ssoconf.log file, which was generated when you ran the ssomig utility in -export mode.
You obtain the password for the ORASSO schema (orasso_pwd) from Oracle Internet Directory as follows:
TEST_INFRA_HOME/bin/ldapsearch -h test_oid_host -p test_oid_port -D cn=orcladmin -w test_orcladmin_pwd -b "orclreferencename=test_oid_service_name,cn=IAS Infrastructure Databases,cn=ias,cn=products,cn=oraclecontext" -s sub "(orclresourcename=orasso)" orclpasswordattribute
The response for ssomig looks similar to the following output:
SSO Migration Tool: Release 10.1.2.0.2 - Production on Sun Feb 27 12:10:27 2005
Copyright (c) 2002-2005 Oracle. All rights reserved.
Verifying SSO schema information...
Data import started at Sun Feb 27 12:10:28 2005

Log Directory        : /tmp
Log File Name        : /tmp/ssomig.log
Import File Name     : ssomig.dmp
SSO Schema Name      : orasso
SSO Schema Password  : ******
Connect String       : orcl
Mode                 : overwrite

Loading data into the SSO schema...Done
Data import completed.
Update the OracleAS Single Sign-On server with directory changes.
Perform a backup of the test Identity Management configuration files and Metadata Repository with the OracleAS Backup and Recovery Tool. See Chapter 20.
Task 3: Set Up Test Product Metadata Repository
To install and set up the test product Metadata Repository:
Identify the character set in use on the production database. The character set is specified in the portal_exp.log file generated from the exp output in Step 4b of "Prerequisite Export Steps".
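For example, assuming the export log is still at the path used in that step, the relevant line can be pulled out of the log:
grep -i "character set" /tmp/portal_exp.log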
Optionally, list the ports used on the production database so that you can use the same port numbers on the copy.
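One convenient place to find these port numbers, assuming the default OracleAS installation layout, is the port list recorded at installation time; the listener port can also be confirmed with lsnrctl status:
cat PRODUCTION_INFRA_HOME/install/portlist.ini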
Install an OracleAS Infrastructure and Metadata Repository.
Install Oracle Application Server using Oracle Universal Installer.
From the Select a Product to Install screen, select OracleAS Infrastructure.
From the Select Installation Type screen, select Metadata Repository.
Create the database using the same character set as the production instance that you are copying, to avoid any character set conversion issues. Also specify the same ports if you want a closer match to production; using different port numbers in the copy should not affect upgrade testing.
Perform a backup of the Metadata Repository database installed with the infrastructure to make it easier to do import iterations in case of upgrade errors. See Chapter 20.
Task 4: Copy Data from the Production Product Metadata Repository to the Test Environment
To copy data from the product Metadata Repository in the production environment to the test environment:
Drop the OracleAS Portal schemas from the product Metadata Repository in the test environment:
Take note of the schema passwords currently assigned to the schemas to be dropped. You will be re-creating these schemas later, and you should provide the same passwords when you create them.
You can query the currently assigned passwords from the newly installed test Oracle Internet Directory:
TEST_INFRA_HOME/bin/ldapsearch -p test_oid_port -h test_oid_host -D cn=orcladmin -w test_orcladmin_password -b "orclReferenceName=portal_service_name,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext" -s sub "(|(orclResourceName=PORTAL*)(orclResourceName=DISCOVERER5))" orclpasswordattribute
Take note of the passwords listed for the following schemas: DISCOVERER5, PORTAL, PORTAL_PUBLIC, PORTAL_APP, and PORTAL_DEMO.
The response looks similar to the following output:
OrclResourceName=PORTAL,orclReferenceName=portal.us.company.com,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext
orclpasswordattribute=DTFDKD58
OrclResourceName=PORTAL_PUBLIC,orclReferenceName=portal.us.company.com,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext
orclpasswordattribute=X9g57NMP
OrclResourceName=PORTAL_DEMO,orclReferenceName=portal.us.company.com,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext
orclpasswordattribute=ZyKR9805
OrclResourceName=PORTAL_APP,orclReferenceName=portal.us.company.com,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext
orclpasswordattribute=IGqvwL9c
OrclResourceName=DISCOVERER5,orclReferenceName=portal.us.company.com,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext
orclpasswordattribute=Alu23B8w
You need to stop all SQL*Plus connections to the PORTAL schema before dropping it, or the attempt to drop it will fail with the error ORA-01940: cannot drop a user that is currently connected.
You can stop the middle-tier services by running the following command from the TEST_MIDTIER_HOME/opmn/bin directory of the middle-tier instance and from the TEST_INFRA_HOME/opmn/bin directory of Identity Management:
opmnctl stopall
Drop the OracleAS Portal-related schemas from the test product Metadata Repository just installed.
Then, log in to SQL*Plus as the SYS user and drop the OracleAS Portal schemas from the product Metadata Repository:
SQL> DROP USER portal_public CASCADE;
User dropped.
SQL> DROP USER portal_app CASCADE;
User dropped.
SQL> DROP USER portal_demo CASCADE;
User dropped.
SQL> DROP USER portal CASCADE;
User dropped.
SQL> DROP USER discoverer5 CASCADE;
User dropped.
Prepare the product Metadata Repository in the test environment for import:
Create or alter tablespaces in the test database.
Check that the required tablespaces exist in the test database. The tablespaces in the test database must at least include all of the ones listed from the production instance. To list all the tablespaces on the target, run the following from SQL*Plus as the SYS user:
SET PAGESIZE 65
SELECT TABLESPACE_NAME FROM DBA_TABLESPACES ORDER BY TABLESPACE_NAME;
The response looks similar to the following output:
TABLESPACE_NAME
------------------------------
B2B_DT
B2B_IDX
B2B_LOB
B2B_RT
DCM
DISCO_PTM5_CACHE
DISCO_PTM5_META
DSGATEWAY_TAB
IAS_META
OCATS
OLTS_ATTRSTORE
OLTS_BATTRSTORE
OLTS_CT_STORE
OLTS_DEFAULT
OLTS_SVRMGSTORE
PORTAL
PORTAL_DOC
PORTAL_IDX
PORTAL_LOG
SYSAUX
SYSTEM
TEMP
UDDISYS_TS
UNDOTBS1
USERS
WCRSYS_TS

26 rows selected.
Comparing this example to the list of tablespaces obtained from the production instance, we see that we need to create the INDX (permanent), PORTAL_TMP (temporary), and DISCO_PTM5_TEMP (temporary) tablespaces.
To create a new tablespace, use the CREATE TABLESPACE or CREATE TEMPORARY TABLESPACE command in SQL*Plus, in the SYS schema. For example:
SQL> CREATE TABLESPACE INDX
  2  DATAFILE '/u01/app/oracle/product/oradata/orcl/indx.dbf' SIZE 20M AUTOEXTEND ON
  3  DEFAULT STORAGE (INITIAL 1M NEXT 2M MINEXTENTS 2);

Tablespace created.

SQL> CREATE TEMPORARY TABLESPACE PORTAL_TMP
  2  TEMPFILE '/u01/app/oracle/product/oradata/orcl/portal_tmp.dbf' SIZE 20M
  3  AUTOEXTEND ON;

Tablespace created.

SQL> CREATE TEMPORARY TABLESPACE DISCO_PTM5_TEMP
  2  TEMPFILE '/u01/app/oracle/product/oradata/asdb/disco_ptm5_temp.dbf' SIZE 20M
  3  AUTOEXTEND ON;

Tablespace created.
For any tablespaces that already exist in the target database, it is recommended that they be set to autoextend or be sized large enough to hold the imported PORTAL schemas. You can use the following script to enable autoextend on all data files:
SET DEFINE OFF
SET HEAD OFF
SET LINES 4000
SPOOL DATAFILES.SQL
SELECT 'ALTER DATABASE DATAFILE '''||FILE_NAME||''' AUTOEXTEND ON;' FROM DBA_DATA_FILES;
SPOOL OFF
At this point, you can edit out any extraneous lines captured in the spool file DATAFILES.SQL, and then run it in the SYS schema to alter the data files:
@DATAFILES.SQL
From the production middle-tier instance, run SQL*Plus, connect to the test Metadata Repository as the SYS user with SYSDBA privileges, and re-create the PORTAL schema by running the wdbisys.sql script from the PRODUCTION_MIDTIER_HOME/portal/admin/plsql/wwv directory:
CONNECT SYS/password@test_metadata_repository_net_service_name AS SYSDBA
@wdbisys.sql PORTAL portal_default_tablespace portal_temporary_tablespace wdbisys.log
In the syntax, test_metadata_repository_net_service_name specifies the net service name for the product Metadata Repository in the test environment, as defined in the tnsnames.ora file.
The response looks similar to the following output:
SQL> @wdbisys.sql PORTAL PORTAL PORTAL_TMP wdbisys.log
B E G I N  S Y S  I N S T A L L
...start: Saturday 26 February , 2005 12:01:23
I. CREATE USER PORTAL and GRANT PRIVS
old   1: create user &&1 identified by &&1
new   1: create user PORTAL identified by PORTAL
User created.
...
PL/SQL procedure successfully completed.
...end of SYS install
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.1 - Production
With the Partitioning, OLAP and Data Mining options
The wdbisys.sql script creates the PORTAL schema and grants all of the necessary privileges.
Change the PORTAL password as the SYS user:
ALTER USER PORTAL IDENTIFIED BY password_from_test_oid;
For example:
SQL> ALTER USER PORTAL IDENTIFIED BY DTFDKD58;
User altered.
Create the PORTAL_PUBLIC schema.
Change to the PRODUCTION_MIDTIER_HOME/portal/admin/plsql/wws directory and run the following script as the SYS user:
CONNECT SYS/password@test_metadata_repository_net_service_name AS SYSDBA
@cruser.sql PORTAL PORTAL portal_default_tablespace portal_temporary_tablespace
The response looks similar to the following output:
SQL> @cruser.sql PORTAL PORTAL PORTAL PORTAL_TMP
old   1: select ('&&1'||'_public') t1 from dual
new   1: select ('PORTAL'||'_public') t1 from dual
...
User altered.
No errors.
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.1 - Production
With the Partitioning, OLAP and Data Mining options

SQL*Plus: Release 10.1.0.3.0 - Production on Tue Mar 1 08:28:17 2005
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.3.1 - Production
With the Partitioning, OLAP and Data Mining options
Change the PORTAL_PUBLIC password as the SYS user:
SQL> ALTER USER PORTAL_PUBLIC IDENTIFIED BY password_from_oid;
For example:
SQL> ALTER USER PORTAL_PUBLIC IDENTIFIED BY X9g57NMP;
User altered.
Create the auxiliary schemas as the SYS user.
Check the list of schemas that will be imported from Step 3a of "Prerequisite Export Steps". If the schemas already exist in the test database, then drop them. Before dropping any schemas, ensure that those schemas are not in use by other applications. To create the new schemas, use the following syntax:
SQL> GRANT CONNECT, RESOURCE TO schema IDENTIFIED BY password;
SQL> ALTER USER schema DEFAULT TABLESPACE default_tablespace TEMPORARY TABLESPACE temporary_tablespace;
You must create a schema for each schema in the list from Step 3a. Use the ALTER USER command to adjust any user properties as necessary. For instance, the default and temporary tablespaces should be set to the ones specified by the results from the query in Step 3a.
When creating the PORTAL_APP, PORTAL_DEMO, or DISCOVERER5 schemas, use the passwords you extracted from Oracle Internet Directory.
Following is an example for the standard schemas:
GRANT CONNECT, RESOURCE TO portal_app IDENTIFIED BY IGqvwL9c;
ALTER USER portal_app DEFAULT TABLESPACE PORTAL TEMPORARY TABLESPACE PORTAL_TMP;
GRANT CONNECT, RESOURCE TO portal_demo IDENTIFIED BY ZyKR9805;
ALTER USER portal_demo DEFAULT TABLESPACE PORTAL TEMPORARY TABLESPACE PORTAL_TMP;
GRANT CONNECT, RESOURCE TO discoverer5 IDENTIFIED BY AHb10z3b;
ALTER USER discoverer5 DEFAULT TABLESPACE PORTAL TEMPORARY TABLESPACE PORTAL_TMP;
If Step 3c of "Prerequisite Export Steps" returned any schemas that had snapshots in them, grant the privilege to define snapshots to those schemas as the SYS user:
SQL> GRANT CREATE SNAPSHOT TO schema;
where schema is each of the schemas returned by performing Step 3c. If the schemas returned are PORTAL_DEMO and SCOTT, then you would issue the following SQL commands:
SQL> GRANT CREATE SNAPSHOT TO PORTAL_DEMO;
SQL> GRANT CREATE SNAPSHOT TO SCOTT;
For an Oracle Standard Edition database only, prepare the database for import:
Run the following as the PORTAL user to initialize the portal login trigger for import:
cd TEST_INFRA_HOME/portal/admin/plsql/wwhost
sqlplus portal/password@portal_net_service_name
SQL> @insttrig.sql PORTAL
Set the NLS_LANG environment variable to reflect the character set of the data in the dump file.
By properly identifying the character set encoding, you enable the database to convert the data to the database character set during the import.
In the following example, the database character set on the product Metadata Repository in the production environment was WE8MSWIN1252. The syntax for setting the NLS_LANG environment variable for this character set is as follows:
setenv NLS_LANG AMERICAN_AMERICA.WE8MSWIN1252
Run the catexp.sql script from the TEST_INFRA_HOME/rdbms/admin directory with SYSDBA privileges:
CONNECT SYS/password AS SYSDBA
@catexp.sql
This script creates the data dictionary and public synonyms for many of its views.
Import schemas into the product Metadata Repository in the test environment:
Run the Import utility.
Make sure that the database version that you are importing into is the same as or later than the version of the database you exported from. The actual import is done with the database imp command as follows, with the example showing just the four core OracleAS Portal schemas. Include any other schemas identified in the SELECT statement from Step 3a of "Prerequisite Export Steps".
TEST_INFRA_HOME/bin/imp portal/portal_password@portal_net_service_name file=/tmp/portal_exp.dmp grants=y log=/tmp/portal_imp.log fromuser=portal,portal_app,portal_demo,portal_public touser=portal,portal_app,portal_demo,portal_public
Enter the list of schemas as a comma-separated list.
You can ignore the following errors in the import log:
IMP-00015: following statement failed because the object already exists:
IMP-00041: Warning: object created with compilation warnings.
IMP-00061: Warning: Object type "SYS"."ODCIPARTINFO" already exists with a different identifier
IMP-00092: Java object "SCOTT"."DeleteDbc" already exists, cannot be created
If you get other errors, such as missing tablespaces or missing schemas, you need to restore your database from your offline backup and repeat the import step after resolving the issue as indicated by the error message. For example, if a referenced schema was missing, include that schema in the export and re-export the set of schemas from the production server; the schema must be created in the test system prior to import, as described in Step 3f of this procedure. This typically happens for schemas that are not automatically identified by the SELECT statement in Step 3a of "Prerequisite Export Steps".
Resolve any job conflict errors in the import log.
The following shows an example of a job conflict in the import log:
IMP-00017: following statement failed with ORACLE error 1:
 "BEGIN DBMS_JOB.ISUBMIT(JOB=>15,WHAT=>'begin execute immediate"
 "''begin wwctx_sso.cleanup_sessions( p_hours_old => 168 ); e"
 "nd;'' ; exception when others then null; end;',NEXT_DATE=>"
 "TO_DATE('2005-03-23:14:11:08','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'SYSDATE +"
 " 24/24',NO_PARSE=>TRUE); END;"
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
ORA-06512: at "SYS.DBMS_JOB", line 97
ORA-06512: at line 1
If such an error is seen during the import, run the following query as the SYS user and look for conflicting jobs:
SELECT * FROM dba_jobs;
After identifying the conflicts, revert to the backup created in Step 4 of "Task 3: Set Up Test Product Metadata Repository".
For each conflict identified in the previous step, update the job with a new, unused job ID. For example:
UPDATE dba_jobs SET job=99 WHERE job=15;
Using that backup, resolve the conflicts, such as updating job IDs, and then rerun all of the steps of "Task 4: Copy Data from the Production Product Metadata Repository to the Test Environment".
If you see the following error in the import log:
IMP-00017: following statement failed with ORACLE error 604:
 "ALTER TABLE "WWPRO_ADAPTER_KEY$" ENABLE CONSTRAINT "WWSEC_ADAPTER_KEY$_CK1""
Then, as the SYS user, run the SELECT statement to find the owners of these tables. Next, run the ALTER statement against each owner, prefixing the schema to the table name in the statement identified within the quotes ("):
SQL> SELECT OWNER FROM dba_tables WHERE table_name = 'WWPRO_ADAPTER_KEY$';

OWNER
------------------------------
ORASSO
PORTAL

SQL> ALTER TABLE orasso.wwpro_adapter_key$ DISABLE CONSTRAINT wwsec_adapter_key$_ck1;
Table altered.
SQL> ALTER TABLE portal.wwpro_adapter_key$ DISABLE CONSTRAINT wwsec_adapter_key$_ck1;
Table altered.
Compile all the invalid objects from the imported schemas.
Run the following script as the SYS user:
TEST_INFRA_HOME/rdbms/admin/utlrp.sql
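For example, one way to invoke the script, assuming you connect with SQL*Plus as SYS with SYSDBA privileges:
CONNECT SYS/password AS SYSDBA
@TEST_INFRA_HOME/rdbms/admin/utlrp.sql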
If the following query in the PORTAL schema returns more than PORTAL_PUBLIC:
SELECT DISTINCT DB_USER FROM WWSEC_PERSON$;
Then, execute the following commands from SQL*Plus as the PORTAL user:
SET HEAD OFF
SET LINES 4000
SPOOL DBUSERS.SQL
SELECT DISTINCT 'ALTER USER '||DB_USER||' GRANT CONNECT THROUGH PORTAL;' FROM WWSEC_PERSON$;
SPOOL OFF
Run DBUSERS.SQL in the target portal instance to grant the connect through privilege to the database users associated with portal users. In most cases, this will simply be PORTAL_PUBLIC, which already has the necessary grant.
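For example, a minimal invocation from SQL*Plus, connected to the target instance as the PORTAL user:
SQL> @DBUSERS.SQL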
Drop the temporary login trigger.
This step is only necessary if the test database is an Oracle Standard Edition database and you performed Step 2h of this procedure; this step is not needed for Enterprise Edition.
Change directories to TEST_INFRA_HOME/portal/admin/plsql/wwhost and run the following script from SQL*Plus in the PORTAL schema:
@droptrig.sql PORTAL
Perform a backup of the test product Metadata Repository, Identity Management configuration files, and Identity Management Metadata Repository with the OracleAS Backup and Recovery Tool. See Chapter 20.
Task 5: Install the Test Middle Tier
To install the test middle-tier instances that you need for your testing and configure them to use the test Identity Management:
Install Oracle Application Server using Oracle Universal Installer.
From the Select a Product to Install screen, choose the Portal and Wireless middle tier for environments with OracleAS Portal, or the Business Intelligence and Forms middle tier for environments with OracleBI Discoverer.
Install any required patches to bring the test system up to the same patch-level release as the production system. Use the Oracle Universal Installer on the production system to determine the installed patches.
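Alternatively, if the OPatch utility is present in the production Oracle home (an assumption; it is not part of every installation), you can list the installed interim patches from the command line:
PRODUCTION_MIDTIER_HOME/OPatch/opatch lsinventory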
Perform a backup of the test middle tier, product Metadata Repository, Identity Management configuration files, and Identity Management Metadata Repository with the OracleAS Backup and Recovery Tool. See Chapter 20.
For 10.1.2.0.2 environments in which both the production and test environments are running on the same operating system, you can either perform a middle-tier installation or clone the production middle-tier instance. To clone the production middle-tier instance, perform tasks in procedure Section 10.4, "Cloning Oracle Application Server Instances".
Task 6: Deploy Applications to the Test Middle Tier
Deploy J2EE application EAR files to the test middle tier. You can use one of the following mechanisms:
Use the DCM deployApplication command.
Navigate to the OC4J Home page -> Applications tab in Oracle Enterprise Manager 10g Application Server Control Console, and click Deploy EAR file.
Copy the configuration settings from the production environment to the test environment for all OracleAS Portal and OracleBI Discoverer applications.
For OracleBI Discoverer, you need to copy the following files from the production environment to the test environment:
configuration.xml and configuration.xsd files, located in:
(10.1.2.0.2 on Unix) ORACLE_HOME/discoverer/config
(9.0.4.1 on Unix) ORACLE_HOME/j2ee/OC4J_BI_FORMS/applications/discoverer/web/WEB-INF
(10.1.2.0.2 on Windows) ORACLE_HOME\discoverer\config
(9.0.4.1 on Windows) ORACLE_HOME\j2ee\OC4J_BI_FORMS\applications\discoverer\web\WEB-INF
(10.1.2.0.2 on UNIX) ORACLE_HOME/j2ee/OC4J_BI_Forms/applications/discoverer/discoverer/WEB-INF
(9.0.4.1 on UNIX) ORACLE_HOME/j2ee/OC4J_BI_FORMS/applications/discoverer/web/WEB-INF
(10.1.2.0.2 on Windows) ORACLE_HOME\j2ee\OC4J_BI_Forms\applications\discoverer\discoverer\WEB-INF
(9.0.4.1 on Windows) ORACLE_HOME\j2ee\OC4J_BI_FORMS\applications\discoverer\web\WEB-INF
prefs.txt file (not available in 9.0.4.1 environments), located in:
(10.1.2.0.2 on UNIX) ORACLE_HOME/discoverer
(10.1.2.0.2 on Windows) ORACLE_HOME\discoverer
The Discoverer registry key file (reg_key.dc), located in:
(10.1.2.0.2 and 9.0.4.1 on UNIX) ORACLE_HOME/discoverer
(10.1.2.0.2 and 9.0.4.1 on Windows) ORACLE_HOME\discoverer
You cannot copy this file across different operating systems. If this migration is happening between little-endian and big-endian operating systems, then run the convertreg.pl script:
perl convertreg.pl test_reg_key.dc_file production_reg_key.dc_file
You can use the Perl that is installed with Oracle Application Server. You can find convertreg.pl in the following directory:
(UNIX) ORACLE_HOME/discoverer/util
(Windows) ORACLE_HOME\discoverer\util
Windows and Linux are examples of little-endian operating systems and Solaris and HP-UX are examples of big-endian operating systems. Do not use this utility when moving from little-endian to little-endian operating systems or big-endian to big-endian operating systems. For these migrations, omit this step (in which case the destination system will have a default set of user preferences).
Note: The convertreg.pl script does not exist on Windows in release 9.0.4.1. For environments running this release, download the patch for this utility. The related Automated Release Updates (ARU) number is ARU 7462620. You can download this patch from OracleMetaLink.
If OracleAS Portal points to a J2EE application provider, then, if desired, deploy those applications to the test middle tier and configure OracleAS Portal to point to the test middle tier.
Perform a backup of the test middle tier configuration files, product Metadata Repository, Identity Management configuration files, and Identity Management Metadata Repository with the OracleAS Backup and Recovery Tool. See Chapter 20.
Remove any old backups.
Task 7: Copy OracleBI Discoverer Data
To copy the OracleBI Discoverer data:
Create a new database schema containing an empty EUL:
TEST_MIDTIER_HOME/bin/eulapi -connect dba/password@database_service_name -create_eul [-apps_mode] -default_tablespace default_tablespace_name -temporary_tablespace temp_tablespace_name -user new_eul_schema -password new_eul_schema_password -log /tmp/createeul.log
Use the -apps_mode flag if you are creating a copy of an Oracle Applications mode EUL.
If you intend to copy data to another customer database in the test environment, use the service name of the test database for database_service_name.
Import the copied contents into the new EUL schema:
TEST_MIDTIER_HOME/bin/eulapi -connect new_eul_owner/password@database_service_name -import /tmp/file1.eex -log /tmp/imp.log
Note the mapping of source EUL schemas to new EUL schemas in order to support the migration of Discoverer connections, as detailed in the next task.
Note: The eulapi utility does not exist on Windows in releases 9.0.4.1 and 10.1.2.0.2. For environments running these releases, download the patches for this utility. The related Automated Release Updates (ARU) number for 9.0.4.1 is ARU 7462620, and for 10.1.2.0.2 it is ARU 7462623. You can download these patches from OracleMetaLink.
Task 8: Clean Up the Test Environment
To clean up the test environment prior to using it:
On the test environment, start the middle-tier services:
Set your environment to the middle-tier Oracle home.
cd TEST_MIDTIER_HOME/opmn/bin
opmnctl startall
Rewire this middle tier to the infrastructure to update the association with Oracle Internet Directory and the directory server, as well as the OracleAS Web Cache settings. The imported Metadata Repository also has references to the production servers. This step will update all those references to use the appropriate server names for the test environment.
Set your environment to the middle-tier Oracle home:
cd TEST_MIDTIER_HOME/portal/conf
./ptlconfig -dad portal
Portal Dependency Settings Tool
Processing Portal instance '/pls/portal' (cn=asdb,cn=oraclecontext)
Processing complete
If your OracleAS Single Sign-On migration data includes OracleBI Discoverer connection data, perform the following steps to update the connection data:
Connect to the test Identity Management database as ORASSO and run the following SQL:
SQL> UPDATE ORASSO_DS.WWSSO_PSEX_USER_INFO$
     SET FVAL2 = '&NewEulName'
     WHERE FNAME2 = 'eul'
     AND FVAL2 = '&OldEulName'
     AND FNAME5 = 'database'
     AND FVAL5 = '&DbConnectString';
Where:
&NewEulName is the name of the new EUL that was created after copying the EUL.
&OldEulName is the name of the production EUL.
&DbConnectString is the net service name or the connect descriptor for the database where the EUL resides and that was used when creating the Discoverer connection pointing to this EUL.
Note: When you deploy OracleBI Discoverer in non-OracleAS Single Sign-On mode, users create private connections, which are stored in the PSTORE. However, OracleBI Discoverer needs to store a cookie on the browser for identifying the browser that created the connections. These cookies will not be available to the test server, and users will not be able to get to their non-OracleAS Single Sign-On private connections stored in the PSTORE. However, users can re-create these connections, and all existing workbooks will be preserved.
To copy the OLAP connections, connect to the Identity Management database as ORASSO and run the following SQL:
SQL> UPDATE ORASSO_DS.WWSSO_PSEX_USER_INFO$
     SET FVAL4 = '&New_Db_Details'
     WHERE user_prefs LIKE '%olap_zone=true%'
     AND FNAME4 = 'database'
     AND FVAL4 = '&Old_Db_Details';
Where:
&New_Db_Details is in the format computer:listener_port:database_SID, for example, testserver.us.company.com:1521:testdb.
&Old_Db_Details is in the format computer:listener_port:database_SID, for example, prodserver.us.company.com:1521:proddb.
Clean up OracleAS Single Sign-On data:
After the OracleAS Single Sign-On import, partner applications registered with the production system are also imported to the test environment. This includes the partner registrations with the production host name. You should remove all of these.
Log into the OracleAS Single Sign-On system by accessing the following URL:
http://test_infra.domain.com:port/pls/orasso
Click SSO Server Administration.
Click Administer Partner Applications.
Scan the list of links by holding the mouse over the partner application name, and delete all the entries that pertain to the production host. Corresponding entries should now exist for the test host.
Change the registration of Discoverer Portlet Provider on the test OracleAS Portal instance.
Modify the registration of the Discoverer Portlet Provider in the OracleAS Portal instance and change the URL for the provider from:
http://production_host:production_OHS_port/discoverer/portletprovider
To:
http://test_host:test_OHS_port/discoverer/portletprovider
Log in to OracleAS Portal as the administrator (for example, PORTAL
).
Click the Administer tab.
Click the Portlets sub-tab.
In the Remote Providers portlet, enter the Discoverer Portlet Provider name in the Name field. Click Edit.
Click the Connection tab.
In the URL field, update the port to the new port number. Click Apply.
Once you establish the test environment, use it for testing upgrades or applying patchsets.
After creating the test environment, consider transforming it to a highly available installation.