Contents
- Audience
- Documentation Accessibility
- Related Documentation
- Conventions
- New Features in Oracle Database 10g Release 2
- New Features in Oracle Database 10g Release 1
- Data Pump Components
- What New Features Do Data Pump Export and Import Provide?
- How Does Data Pump Access Data?
  - Direct Path Loads and Unloads
  - External Tables
  - Accessing Data Over a Database Link
- What Happens During Execution of a Data Pump Job?
  - Coordination of a Job
  - Tracking Progress Within a Job
  - Filtering Data and Metadata During a Job
  - Transforming Metadata During a Job
  - Maximizing Job Performance
  - Loading and Unloading of Data
- Monitoring Job Status
  - Monitoring the Progress of Executing Jobs
- File Allocation
  - Specifying Files and Adding Additional Dump Files
  - Default Locations for Dump, Log, and SQL Files
    - Using Directory Objects When Automatic Storage Management Is Enabled
  - Setting Parallelism
  - Using Substitution Variables
- Moving Data Between Different Database Versions
- Original Export and Import Versus Data Pump Export and Import
- What Is Data Pump Export?
- Invoking Data Pump Export
  - Data Pump Export Interfaces
  - Data Pump Export Modes
    - Full Export Mode
    - Schema Mode
    - Table Mode
    - Tablespace Mode
    - Transportable Tablespace Mode
  - Network Considerations
- Filtering During Export Operations
  - Data Filters
  - Metadata Filters
- Parameters Available in Export's Command-Line Mode
  - ATTACH
  - COMPRESSION
  - CONTENT
  - DIRECTORY
  - DUMPFILE
  - ENCRYPTION_PASSWORD
  - ESTIMATE
  - ESTIMATE_ONLY
  - EXCLUDE
  - FILESIZE
  - FLASHBACK_SCN
  - FLASHBACK_TIME
  - FULL
  - HELP
  - INCLUDE
  - JOB_NAME
  - LOGFILE
  - NETWORK_LINK
  - NOLOGFILE
  - PARALLEL
  - PARFILE
  - QUERY
  - SAMPLE
  - SCHEMAS
  - STATUS
  - TABLES
  - TABLESPACES
  - TRANSPORT_FULL_CHECK
  - TRANSPORT_TABLESPACES
  - VERSION
- How Data Pump Export Parameters Map to Those of the Original Export Utility
- Commands Available in Export's Interactive-Command Mode
  - ADD_FILE
  - CONTINUE_CLIENT
  - EXIT_CLIENT
  - FILESIZE
  - HELP
  - KILL_JOB
  - PARALLEL
  - START_JOB
  - STATUS
  - STOP_JOB
- Examples of Using Data Pump Export
  - Performing a Table-Mode Export
  - Data-Only Unload of Selected Tables and Rows
  - Estimating Disk Space Needed in a Table-Mode Export
  - Performing a Schema-Mode Export
  - Performing a Parallel Full Database Export
  - Using Interactive Mode to Stop and Reattach to a Job
- Syntax Diagrams for Data Pump Export
- What Is Data Pump Import?
- Invoking Data Pump Import
  - Data Pump Import Interfaces
  - Data Pump Import Modes
    - Full Import Mode
    - Schema Mode
    - Table Mode
    - Tablespace Mode
    - Transportable Tablespace Mode
  - Network Considerations
- Filtering During Import Operations
  - Data Filters
  - Metadata Filters
- Parameters Available in Import's Command-Line Mode
  - ATTACH
  - CONTENT
  - DIRECTORY
  - DUMPFILE
  - ENCRYPTION_PASSWORD
  - ESTIMATE
  - EXCLUDE
  - FLASHBACK_SCN
  - FLASHBACK_TIME
  - FULL
  - HELP
  - INCLUDE
  - JOB_NAME
  - LOGFILE
  - NETWORK_LINK
  - NOLOGFILE
  - PARALLEL
  - PARFILE
  - QUERY
  - REMAP_DATAFILE
  - REMAP_SCHEMA
  - REMAP_TABLESPACE
  - REUSE_DATAFILES
  - SCHEMAS
  - SKIP_UNUSABLE_INDEXES
  - SQLFILE
  - STATUS
  - STREAMS_CONFIGURATION
  - TABLE_EXISTS_ACTION
  - TABLES
  - TABLESPACES
  - TRANSFORM
  - TRANSPORT_DATAFILES
  - TRANSPORT_FULL_CHECK
  - TRANSPORT_TABLESPACES
  - VERSION
- How Data Pump Import Parameters Map to Those of the Original Import Utility
- Commands Available in Import's Interactive-Command Mode
  - CONTINUE_CLIENT
  - EXIT_CLIENT
  - HELP
  - KILL_JOB
  - PARALLEL
  - START_JOB
  - STATUS
  - STOP_JOB
- Examples of Using Data Pump Import
  - Performing a Data-Only Table-Mode Import
  - Performing a Schema-Mode Import
  - Performing a Network-Mode Import
- Syntax Diagrams for Data Pump Import
- Data Performance Improvements for Data Pump Export and Import
- Tuning Performance
  - Controlling Resource Consumption
- Initialization Parameters That Affect Data Pump Performance
  - Setting the Size Of the Buffer Cache In a Streams Environment
- How Does the Client Interface to the Data Pump API Work?
  - Job States
- What Are the Basic Steps in Using the Data Pump API?
- Examples of Using the Data Pump API
- SQL*Loader Features
- SQL*Loader Parameters
- SQL*Loader Control File
- Input Data and Datafiles
  - Fixed Record Format
  - Variable Record Format
  - Stream Record Format
  - Logical Records
  - Data Fields
  - LOBFILEs and Secondary Datafiles (SDFs)
- Data Conversion and Datatype Specification
- Discarded and Rejected Records
  - The Bad File
    - SQL*Loader Rejects
    - Oracle Database Rejects
  - The Discard File
- Log File and Logging Information
- Conventional Path Loads, Direct Path Loads, and External Table Loads
  - Conventional Path Loads
  - Direct Path Loads
    - Parallel Direct Path
  - External Table Loads
  - Choosing External Tables Versus SQL*Loader
- Loading Objects, Collections, and LOBs
  - Supported Object Types
    - column objects
    - row objects
  - Supported Collection Types
    - Nested Tables
    - VARRAYs
  - Supported LOB Types
- Partitioned Object Support
- Application Development: Direct Path Load API
- SQL*Loader Case Studies
  - Case Study Files
  - Running the Case Studies
  - Case Study Log Files
  - Checking the Results of a Case Study
- Invoking SQL*Loader
  - Alternative Ways to Specify Parameters
- Command-Line Parameters
  - BAD (bad file)
  - BINDSIZE (maximum size)
  - COLUMNARRAYROWS
  - CONTROL (control file)
  - DATA (datafile)
  - DATE_CACHE
  - DIRECT (data path)
  - DISCARD (filename)
  - DISCARDMAX (integer)
  - ERRORS (errors to allow)
  - EXTERNAL_TABLE
    - Restrictions When Using EXTERNAL_TABLE
  - FILE (file to load into)
  - LOAD (records to load)
  - LOG (log file)
  - MULTITHREADING
  - PARALLEL (parallel load)
  - PARFILE (parameter file)
  - READSIZE (read buffer size)
  - RESUMABLE
  - RESUMABLE_NAME
  - RESUMABLE_TIMEOUT
  - ROWS (rows per commit)
  - SILENT (feedback mode)
  - SKIP (records to skip)
  - SKIP_INDEX_MAINTENANCE
  - SKIP_UNUSABLE_INDEXES
  - STREAMSIZE
  - USERID (username/password)
- Exit Codes for Inspection and Display
- Control File Contents
  - Comments in the Control File
- Specifying Command-Line Parameters in the Control File
  - OPTIONS Clause
- Specifying Filenames and Object Names
  - Filenames That Conflict with SQL and SQL*Loader Reserved Words
  - Specifying SQL Strings
  - Operating System Considerations
    - Specifying a Complete Path
    - Backslash Escape Character
    - Nonportable Strings
    - Using the Backslash as an Escape Character
    - Escape Character Is Sometimes Disallowed
- Identifying XML Type Tables
- Specifying Datafiles
  - Examples of INFILE Syntax
  - Specifying Multiple Datafiles
- Identifying Data in the Control File with BEGINDATA
- Specifying Datafile Format and Buffering
- Specifying the Bad File
  - Examples of Specifying a Bad File Name
  - How Bad Files Are Handled with LOBFILEs and SDFs
  - Criteria for Rejected Records
- Specifying the Discard File
  - Specifying the Discard File in the Control File
  - Specifying the Discard File from the Command Line
  - Examples of Specifying a Discard File Name
  - Criteria for Discarded Records
  - How Discard Files Are Handled with LOBFILEs and SDFs
  - Limiting the Number of Discarded Records
- Handling Different Character Encoding Schemes
  - Multibyte (Asian) Character Sets
  - Unicode Character Sets
  - Database Character Sets
  - Datafile Character Sets
  - Input Character Conversion
    - Considerations When Loading Data into VARRAYs or Primary-Key-Based REFs
    - CHARACTERSET Parameter
    - Control File Character Set
    - Character-Length Semantics
- Interrupted Loads
  - Discontinued Conventional Path Loads
  - Discontinued Direct Path Loads
    - Load Discontinued Because of Space Errors
    - Load Discontinued Because Maximum Number of Errors Exceeded
    - Load Discontinued Because of Fatal Errors
    - Load Discontinued Because a Ctrl+C Was Issued
  - Status of Tables and Indexes After an Interrupted Load
  - Using the Log File to Determine Load Status
  - Continuing Single-Table Loads
- Assembling Logical Records from Physical Records
  - Using CONCATENATE to Assemble Logical Records
  - Using CONTINUEIF to Assemble Logical Records
- Loading Logical Records into Tables
  - Specifying Table Names
    - INTO TABLE Clause
  - Table-Specific Loading Method
    - Loading Data into Empty Tables
    - Loading Data into Nonempty Tables
  - Table-Specific OPTIONS Parameter
  - Loading Records Based on a Condition
    - Using the WHEN Clause with LOBFILEs and SDFs
  - Specifying Default Data Delimiters
    - fields_spec
    - termination_spec
    - enclosure_spec
  - Handling Short Records with Missing Data
    - TRAILING NULLCOLS Clause
- Index Options
  - SORTED INDEXES Clause
  - SINGLEROW Option
- Benefits of Using Multiple INTO TABLE Clauses
  - Extracting Multiple Logical Records
    - Relative Positioning Based on Delimiters
  - Distinguishing Different Input Record Formats
    - Relative Positioning Based on the POSITION Parameter
  - Distinguishing Different Input Row Object Subtypes
  - Loading Data into Multiple Tables
  - Summary
- Bind Arrays and Conventional Path Loads
  - Size Requirements for Bind Arrays
  - Performance Implications of Bind Arrays
  - Specifying Number of Rows Versus Size of Bind Array
  - Calculations to Determine Bind Array Size
    - Determining the Size of the Length Indicator
    - Calculating the Size of Field Buffers
  - Minimizing Memory Requirements for Bind Arrays
  - Calculating Bind Array Size for Multiple INTO TABLE Clauses
- Field List Contents
- Specifying the Position of a Data Field
  - Using POSITION with Data Containing Tabs
  - Using POSITION with Multiple Table Loads
  - Examples of Using POSITION
- Specifying Columns and Fields
  - Specifying Filler Fields
  - Specifying the Datatype of a Data Field
- SQL*Loader Datatypes
  - Nonportable Datatypes
    - INTEGER(n)
    - SMALLINT
    - FLOAT
    - DOUBLE
    - BYTEINT
    - ZONED
    - DECIMAL
    - VARGRAPHIC
    - VARCHAR
    - VARRAW
    - LONG VARRAW
  - Portable Datatypes
    - CHAR
    - Datetime and Interval Datatypes
    - GRAPHIC
    - GRAPHIC EXTERNAL
    - Numeric EXTERNAL
    - RAW
    - VARCHARC
    - VARRAWC
    - Conflicting Native Datatype Field Lengths
    - Field Lengths for Length-Value Datatypes
  - Datatype Conversions
  - Datatype Conversions for Datetime and Interval Datatypes
  - Specifying Delimiters
    - TERMINATED Fields
    - ENCLOSED Fields
    - Syntax for Termination and Enclosure Specification
    - Delimiter Marks in the Data
    - Maximum Length of Delimited Data
    - Loading Trailing Blanks with Delimiters
  - Conflicting Field Lengths for Character Datatypes
    - Predetermined Size Fields
    - Delimited Fields
    - Date Field Masks
- Specifying Field Conditions
  - Comparing Fields to BLANKS
  - Comparing Fields to Literals
- Using the WHEN, NULLIF, and DEFAULTIF Clauses
  - Examples of Using the WHEN, NULLIF, and DEFAULTIF Clauses
- Loading Data Across Different Platforms
- Byte Ordering
  - Specifying Byte Order
  - Using Byte Order Marks (BOMs)
    - Suppressing Checks for BOMs
- Loading All-Blank Fields
- Trimming Whitespace
  - Datatypes for Which Whitespace Can Be Trimmed
  - Specifying Field Length for Datatypes for Which Whitespace Can Be Trimmed
    - Predetermined Size Fields
    - Delimited Fields
  - Relative Positioning of Fields
    - No Start Position Specified for a Field
    - Previous Field Terminated by a Delimiter
    - Previous Field Has Both Enclosure and Termination Delimiters
  - Leading Whitespace
    - Previous Field Terminated by Whitespace
    - Optional Enclosure Delimiters
  - Trimming Trailing Whitespace
  - Trimming Enclosed Fields
- How the PRESERVE BLANKS Option Affects Whitespace Trimming
  - How [NO] PRESERVE BLANKS Works with Delimiter Clauses
- Applying SQL Operators to Fields
  - Referencing Fields
  - Common Uses of SQL Operators in Field Specifications
  - Combinations of SQL Operators
  - Using SQL Strings with a Date Mask
  - Interpreting Formatted Fields
  - Using SQL Strings to Load the ANYDATA Database Type
- Using SQL*Loader to Generate Data for Input
  - Loading Data Without Files
  - Setting a Column to a Constant Value
    - CONSTANT Parameter
  - Setting a Column to an Expression Value
    - EXPRESSION Parameter
  - Setting a Column to the Datafile Record Number
    - RECNUM Parameter
  - Setting a Column to the Current Date
    - SYSDATE Parameter
  - Setting a Column to a Unique Sequence Number
    - SEQUENCE Parameter
  - Generating Sequence Numbers for Multiple Tables
    - Example: Generating Different Sequence Numbers for Each Insert
- Loading Column Objects
  - Loading Column Objects in Stream Record Format
  - Loading Column Objects in Variable Record Format
  - Loading Nested Column Objects
  - Loading Column Objects with a Derived Subtype
  - Specifying Null Values for Objects
    - Specifying Attribute Nulls
    - Specifying Atomic Nulls
  - Loading Column Objects with User-Defined Constructors
- Loading Object Tables
  - Loading Object Tables with a Subtype
- Loading REF Columns
  - System-Generated OID REF Columns
  - Primary Key REF Columns
  - Unscoped REF Columns That Allow Primary Keys
- Loading LOBs
  - Loading LOB Data from a Primary Datafile
    - LOB Data in Predetermined Size Fields
    - LOB Data in Delimited Fields
    - LOB Data in Length-Value Pair Fields
  - Loading LOB Data from LOBFILEs
    - Dynamic Versus Static LOBFILE Specifications
    - Examples of Loading LOB Data from LOBFILEs
    - Considerations When Loading LOBs from LOBFILEs
- Loading BFILE Columns
- Loading Collections (Nested Tables and VARRAYs)
  - Restrictions in Nested Tables and VARRAYs
  - Secondary Datafiles (SDFs)
  - Dynamic Versus Static SDF Specifications
- Loading a Parent Table Separately from Its Child Table
  - Memory Issues When Loading VARRAY Columns
- Data Loading Methods
  - Loading ROWID Columns
  - Conventional Path Load
    - Conventional Path Load of a Single Partition
    - When to Use a Conventional Path Load
  - Direct Path Load
    - Data Conversion During Direct Path Loads
    - Direct Path Load of a Partitioned or Subpartitioned Table
    - Direct Path Load of a Single Partition or Subpartition
    - Advantages of a Direct Path Load
    - Restrictions on Using Direct Path Loads
    - Restrictions on a Direct Path Load of a Single Partition
    - When to Use a Direct Path Load
    - Integrity Constraints
    - Field Defaults on the Direct Path
    - Loading into Synonyms
- Using Direct Path Load
  - Setting Up for Direct Path Loads
  - Specifying a Direct Path Load
  - Building Indexes
    - Improving Performance
    - Temporary Segment Storage Requirements
  - Indexes Left in an Unusable State
  - Using Data Saves to Protect Against Data Loss
    - Using the ROWS Parameter
    - Data Save Versus Commit
  - Data Recovery During Direct Path Loads
    - Media Recovery and Direct Path Loads
    - Instance Recovery and Direct Path Loads
  - Loading Long Data Fields
    - Loading Data As PIECED
- Optimizing Performance of Direct Path Loads
  - Preallocating Storage for Faster Loading
  - Presorting Data for Faster Indexing
    - SORTED INDEXES Clause
    - Unsorted Data
    - Multiple-Column Indexes
    - Choosing the Best Sort Order
  - Infrequent Data Saves
  - Minimizing Use of the Redo Log
    - Disabling Archiving
    - Specifying the SQL*Loader UNRECOVERABLE Clause
    - Setting the SQL NOLOGGING Parameter
  - Specifying the Number of Column Array Rows and Size of Stream Buffers
  - Specifying a Value for the Date Cache
- Optimizing Direct Path Loads on Multiple-CPU Systems
- Avoiding Index Maintenance
- Direct Loads, Integrity Constraints, and Triggers
  - Integrity Constraints
    - Enabled Constraints
    - Disabled Constraints
    - Reenable Constraints
  - Database Insert Triggers
    - Replacing Insert Triggers with Integrity Constraints
    - When Automatic Constraints Cannot Be Used
    - Preparation
    - Using an Update Trigger
    - Duplicating the Effects of Exception Conditions
    - Using a Stored Procedure
  - Permanently Disabled Triggers and Constraints
  - Increasing Performance with Concurrent Conventional Path Loads
- Parallel Data Loading Models
  - Concurrent Conventional Path Loads
  - Intersegment Concurrency with Direct Path
  - Intrasegment Concurrency with Direct Path
  - Restrictions on Parallel Direct Path Loads
  - Initiating Multiple SQL*Loader Sessions
  - Parameters for Parallel Direct Path Loads
    - Using the FILE Parameter to Specify Temporary Segments
  - Enabling Constraints After a Parallel Direct Path Load
  - PRIMARY KEY and UNIQUE KEY Constraints
- General Performance Improvement Hints
- How Are External Tables Created?
  - Access Parameters
  - Location of Datafiles and Output Files
  - Example: Creating and Loading an External Table Using ORACLE_LOADER
- Using External Tables to Load and Unload Data
  - Loading Data
  - Unloading Data Using the ORACLE_DATAPUMP Access Driver
  - Dealing with Column Objects
- Datatype Conversion During External Table Use
- Parallel Access to External Tables
  - Parallel Access with ORACLE_LOADER
  - Parallel Access with ORACLE_DATAPUMP
- Performance Hints When Using External Tables
  - Performance Hints Specific to the ORACLE_LOADER Access Driver
- External Table Restrictions
  - Restrictions Specific to the ORACLE_DATAPUMP Access Driver
- Behavior Differences Between SQL*Loader and External Tables
  - Multiple Primary Input Datafiles
  - Syntax and Datatypes
  - Byte-Order Marks
  - Default Character Sets, Date Masks, and Decimal Separator
  - Use of the Backslash Escape Character
- access_parameters Clause
- record_format_info Clause
  - FIXED length
  - VARIABLE size
  - DELIMITED BY
  - CHARACTERSET
  - LANGUAGE
  - TERRITORIES
  - DATA IS...ENDIAN
  - BYTEORDERMARK (CHECK | NOCHECK)
  - STRING SIZES ARE IN
  - LOAD WHEN
  - BADFILE | NOBADFILE
  - DISCARDFILE | NODISCARDFILE
  - LOGFILE | NOLOGFILE
  - SKIP
  - READSIZE
  - DATE_CACHE
  - string
  - condition_spec
  - [directory object name:] filename
  - condition
    - range start : range end
- field_definitions Clause
  - delim_spec
    - Example: External Table with Terminating Delimiters
    - Example: External Table with Enclosure and Terminator Delimiters
    - Example: External Table with Optional Enclosure Delimiters
  - trim_spec
  - MISSING FIELD VALUES ARE NULL
  - field_list
  - pos_spec Clause
    - start
    - *
    - increment
    - end
    - length
  - datatype_spec Clause
    - [UNSIGNED] INTEGER [EXTERNAL] [(len)]
    - DECIMAL [EXTERNAL] and ZONED [EXTERNAL]
    - ORACLE_DATE
    - ORACLE_NUMBER
    - Floating-Point Numbers
    - DOUBLE
    - FLOAT [EXTERNAL]
    - BINARY_DOUBLE
    - BINARY_FLOAT
    - RAW
    - CHAR
    - date_format_spec
    - VARCHAR and VARRAW
    - VARCHARC and VARRAWC
  - init_spec Clause
- column_transforms Clause
  - transform
    - column_name
    - NULL
    - CONSTANT
    - CONCAT
    - LOBFILE
    - lobfile_attr_list
- Reserved Words for the ORACLE_LOADER Access Driver
- access_parameters Clause
  - comments
  - LOGFILE | NOLOGFILE
    - Filenames for LOGFILE
    - Example of LOGFILE Usage for ORACLE_DATAPUMP
  - VERSION Clause
  - Effects of Using the SQL ENCRYPT Clause
- Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver
  - Parallel Loading and Unloading
  - Combining Dump Files
- Supported Datatypes
- Unsupported Datatypes
  - Unloading and Loading BFILE Datatypes
  - Unloading LONG and LONG RAW Datatypes
  - Unloading and Loading Columns Containing Final Object Types
  - Tables of Final Object Types
- Reserved Words for the ORACLE_DATAPUMP Access Driver
- Using DBVERIFY to Validate Disk Blocks of a Single Datafile
  - Syntax
  - Parameters
  - Command-Line Interface
  - Sample DBVERIFY Output
- Using DBVERIFY to Validate a Segment
  - Syntax
  - Parameters
  - Command-Line Interface
- What Is the DBNEWID Utility?
- Ramifications of Changing the DBID and DBNAME
  - Considerations for Global Database Names
- Changing the DBID and DBNAME of a Database
  - Changing the DBID and Database Name
  - Changing Only the Database ID
  - Changing Only the Database Name
  - Troubleshooting DBNEWID
- DBNEWID Syntax
  - Parameters
  - Restrictions and Usage Notes
  - Additional Restrictions for Releases Prior to Oracle Database 10g
- LogMiner Benefits
- Introduction to LogMiner
  - LogMiner Configuration
    - Sample Configuration
    - Requirements
  - Directing LogMiner Operations and Retrieving Data of Interest
- LogMiner Dictionary Files and Redo Log Files
  - LogMiner Dictionary Options
    - Using the Online Catalog
    - Extracting a LogMiner Dictionary to the Redo Log Files
    - Extracting the LogMiner Dictionary to a Flat File
  - Redo Log File Options
- Starting LogMiner
- Querying V$LOGMNR_CONTENTS for Redo Data of Interest
  - How the V$LOGMNR_CONTENTS View Is Populated
  - Querying V$LOGMNR_CONTENTS Based on Column Values
    - The Meaning of NULL Values Returned by the MINE_VALUE Function
    - Usage Rules for the MINE_VALUE and COLUMN_PRESENT Functions
- Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS
  - Showing Only Committed Transactions
  - Skipping Redo Corruptions
  - Filtering Data by Time
  - Filtering Data by SCN
  - Formatting Reconstructed SQL Statements for Reexecution
  - Formatting the Appearance of Returned Data for Readability
- Reapplying DDL Statements Returned to V$LOGMNR_CONTENTS
- Calling DBMS_LOGMNR.START_LOGMNR Multiple Times
- Supplemental Logging
  - Database-Level Supplemental Logging
    - Minimal Supplemental Logging
    - Database-Level Identification Key Logging
    - Disabling Database-Level Supplemental Logging
  - Table-Level Supplemental Logging
    - Table-Level Identification Key Logging
    - Table-Level User-Defined Supplemental Log Groups
    - Usage Notes for User-Defined Supplemental Log Groups
  - Tracking DDL Statements in the LogMiner Dictionary
  - DDL_DICT_TRACKING and Supplemental Logging Settings
  - DDL_DICT_TRACKING and Specified Time or SCN Ranges
- Accessing LogMiner Operational Information in Views
  - Querying V$LOGMNR_LOGS
  - Querying Views for Supplemental Logging Settings
- Steps in a Typical LogMiner Session
  - Enable Supplemental Logging
  - Extract a LogMiner Dictionary
  - Specify Redo Log Files for Analysis
  - Start LogMiner
  - Query V$LOGMNR_CONTENTS
  - End the LogMiner Session
- Examples Using LogMiner
  - Examples of Mining by Explicitly Specifying the Redo Log Files of Interest
    - Example 1: Finding All Modifications in the Last Archived Redo Log File
    - Example 2: Grouping DML Statements into Committed Transactions
    - Example 3: Formatting the Reconstructed SQL
    - Example 4: Using the LogMiner Dictionary in the Redo Log Files
    - Example 5: Tracking DDL Statements in the Internal Dictionary
    - Example 6: Filtering Output by Time Range
  - Examples of Mining Without Specifying the List of Redo Log Files Explicitly
    - Example 1: Mining Redo Log Files in a Given Time Range
    - Example 2: Mining the Redo Log Files in a Given SCN Range
    - Example 3: Using Continuous Mining to Include Future Values in a Query
  - Example Scenarios
    - Scenario 1: Using LogMiner to Track Changes Made by a Specific User
    - Scenario 2: Using LogMiner to Calculate Table Access Statistics
- Supported Datatypes, Storage Attributes, and Database and Redo Log File Versions
  - Supported Datatypes and Table Storage Attributes
  - Unsupported Datatypes and Table Storage Attributes
  - Supported Databases and Redo Log File Versions
- Why Use the Metadata API?
- Overview of the Metadata API
- Using the Metadata API to Retrieve an Object's Metadata
  - Typical Steps Used for Basic Metadata Retrieval
  - Retrieving Multiple Objects
  - Placing Conditions on Transforms
  - Accessing Specific Metadata Attributes
- Using the Metadata API to Re-Create a Retrieved Object
- Retrieving Collections of Different Object Types
  - Filtering the Return of Heterogeneous Object Types
- Performance Tips for the Programmatic Interface of the Metadata API
- Example Usage of the Metadata API
  - What Does the Metadata API Example Do?
  - Output Generated from the GET_PAYROLL_TABLES Procedure
- Summary of DBMS_METADATA Procedures
- What Are the Export and Import Utilities?
- Before Using Export and Import
  - Running catexp.sql or catalog.sql
  - Ensuring Sufficient Disk Space for Export Operations
  - Verifying Access Privileges for Export and Import Operations
- Invoking Export and Import
  - Invoking Export and Import As SYSDBA
  - Command-Line Entries
  - Parameter Files
  - Interactive Mode
    - Restrictions When Using Export's Interactive Method
  - Getting Online Help
- Importing Objects into Your Own Schema
  - Importing Grants
  - Importing Objects into Other Schemas
  - Importing System Objects
- Processing Restrictions
- Table Objects: Order of Import
- Importing into Existing Tables
  - Manually Creating Tables Before Importing Data
  - Disabling Referential Constraints
  - Manually Ordering the Import
- Effect of Schema and Database Triggers on Import Operations
- Export and Import Modes
  - Table-Level and Partition-Level Export
    - Table-Level Export
    - Partition-Level Export
  - Table-Level and Partition-Level Import
    - Guidelines for Using Table-Level Import
    - Guidelines for Using Partition-Level Import
    - Migrating Data Across Partitions and Tables
- Export Parameters
  - BUFFER
    - Example: Calculating Buffer Size
  - COMPRESS
  - CONSISTENT
  - CONSTRAINTS
  - DIRECT
  - FEEDBACK
  - FILE
  - FILESIZE
  - FLASHBACK_SCN
  - FLASHBACK_TIME
  - FULL
    - Points to Consider for Full Database Exports and Imports
  - GRANTS
  - HELP
  - INDEXES
  - LOG
  - OBJECT_CONSISTENT
  - OWNER
  - PARFILE
  - QUERY
    - Restrictions When Using the QUERY Parameter
  - RECORDLENGTH
  - RESUMABLE
  - RESUMABLE_NAME
  - RESUMABLE_TIMEOUT
  - ROWS
  - STATISTICS
  - TABLES
    - Table Name Restrictions
  - TABLESPACES
  - TRANSPORT_TABLESPACE
  - TRIGGERS
  - TTS_FULL_CHECK
  - USERID (username/password)
  - VOLSIZE
- Import Parameters
  - BUFFER
  - COMMIT
  - COMPILE
  - CONSTRAINTS
  - DATAFILES
  - DESTROY
  - FEEDBACK
  - FILE
  - FILESIZE
  - FROMUSER
  - FULL
  - GRANTS
  - HELP
  - IGNORE
  - INDEXES
  - INDEXFILE
  - LOG
  - PARFILE
  - RECORDLENGTH
  - RESUMABLE
  - RESUMABLE_NAME
  - RESUMABLE_TIMEOUT
  - ROWS
  - SHOW
  - SKIP_UNUSABLE_INDEXES
  - STATISTICS
  - STREAMS_CONFIGURATION
  - STREAMS_INSTANTIATION
  - TABLES
    - Table Name Restrictions
  - TABLESPACES
  - TOID_NOVALIDATE
  - TOUSER
  - TRANSPORT_TABLESPACE
  - TTS_OWNERS
  - USERID (username/password)
  - VOLSIZE
- Example Export Sessions
  - Example Export Session in Full Database Mode
  - Example Export Session in User Mode
  - Example Export Sessions in Table Mode
    - Example 1: DBA Exporting Tables for Two Users
    - Example 2: User Exports Tables That He Owns
    - Example 3: Using Pattern Matching to Export Various Tables
  - Example Export Session Using Partition-Level Export
    - Example 1: Exporting a Table Without Specifying a Partition
    - Example 2: Exporting a Table with a Specified Partition
    - Example 3: Exporting a Composite Partition
- Example Import Sessions
  - Example Import of Selected Tables for a Specific User
  - Example Import of Tables Exported by Another User
  - Example Import of Tables from One User to Another
  - Example Import Session Using Partition-Level Import
    - Example 1: A Partition-Level Import
    - Example 2: A Partition-Level Import of a Composite Partitioned Table
    - Example 3: Repartitioning a Table on a Different Column
  - Example Import Using Pattern Matching to Import Various Tables
- Using Export and Import to Move a Database Between Platforms
- Warning, Error, and Completion Messages
  - Log File
  - Warning Messages
  - Nonrecoverable Error Messages
  - Completion Messages
- Exit Codes for Inspection and Display
- Network Considerations
  - Transporting Export Files Across a Network
  - Exporting and Importing with Oracle Net
- Character Set and Globalization Support Considerations
  - User Data
    - Effect of Character Set Sorting Order on Conversions
  - Data Definition Language (DDL)
  - Single-Byte Character Sets and Export and Import
  - Multibyte Character Sets and Export and Import
- Materialized Views and Snapshots
  - Snapshot Log
  - Snapshots
    - Importing a Snapshot
    - Importing a Snapshot into a Different Schema
- Transportable Tablespaces
- Read-Only Tablespaces
- Dropping a Tablespace
- Reorganizing Tablespaces
- Support for Fine-Grained Access Control
- Using Instance Affinity with Export and Import
- Reducing Database Fragmentation
- Using Storage Parameters with Export and Import
  - The OPTIMAL Parameter
  - Storage Parameters for OID Indexes and LOB Columns
  - Overriding Storage Parameters
  - The Export COMPRESS Parameter
- Information Specific to Export
  - Conventional Path Export Versus Direct Path Export
  - Invoking a Direct Path Export
    - Security Considerations for Direct Path Exports
    - Performance Considerations for Direct Path Exports
    - Restrictions for Direct Path Exports
  - Exporting from a Read-Only Database
  - Considerations When Exporting Database Objects
    - Exporting Sequences
    - Exporting LONG and LOB Datatypes
    - Exporting Foreign Function Libraries
    - Exporting Offline Locally Managed Tablespaces
    - Exporting Directory Aliases
    - Exporting BFILE Columns and Attributes
    - Exporting External Tables
    - Exporting Object Type Definitions
    - Exporting Nested Tables
    - Exporting Advanced Queue (AQ) Tables
    - Exporting Synonyms
    - Possible Export Errors Related to Java Synonyms
- Information Specific to Import
  - Error Handling During an Import Operation
    - Row Errors
    - Errors Importing Database Objects
  - Controlling Index Creation and Maintenance
    - Delaying Index Creation
    - Index Creation and Maintenance Controls
  - Importing Statistics
  - Tuning Considerations for Import Operations
    - Changing System-Level Options
    - Changing Initialization Parameters
    - Changing Import Options
    - Dealing with Large Amounts of LOB Data
    - Dealing with Large Amounts of LONG Data
  - Considerations When Importing Database Objects
    - Importing Object Identifiers
    - Importing Existing Object Tables and Tables That Contain Object Types
    - Importing Nested Tables
    - Importing REF Data
    - Importing BFILE Columns and Directory Aliases
    - Importing Foreign Function Libraries
    - Importing Stored Procedures, Functions, and Packages
    - Importing Java Objects
    - Importing External Tables
    - Importing Advanced Queue (AQ) Tables
    - Importing LONG Columns
    - Importing LOB Columns When Triggers Are Present
    - Importing Views
    - Importing Partitioned Tables
- Using Export and Import to Partition a Database Migration
  - Advantages of Partitioning a Migration
  - Disadvantages of Partitioning a Migration
  - How to Use Export and Import to Partition a Database Migration
- Using Different Releases and Versions of Export
  - Restrictions When Using Different Releases and Versions of Export and Import
  - Examples of Using Different Releases of Export and Import
  - Creating Oracle Release 8.0 Export Files from an Oracle9i Database
- About ASMCMD
- About ASM Files, Filenames, Directories, and Aliases
- Running ASMCMD
  - Preparing to Run ASMCMD
  - Running ASMCMD in Interactive Mode
  - Running ASMCMD in Noninteractive Mode
- Getting Help
- ASMCMD Command Reference
  - cd
  - du
  - exit
  - find
  - help
  - ls
  - lsct
  - lsdg
  - mkalias
  - mkdir
  - pwd
  - rm
  - rmalias