Backup: Expdp and impdp complete reference

Different options for expdp/impdp.

As you know, expdp is a logical backup, not a physical one. It works within the same version, and you can also expdp/impdp from a lower to a higher version. Oracle is very restrictive about expdp/impdp from a higher to a lower version.

There are a few important things you should keep in mind.

1. Create the directory, if it does not already exist, and ensure the relevant user has been granted read and write on it. This directory will be used to store the dump and log files.

CREATE OR REPLACE DIRECTORY DIR_NAME AS '/ora01/db/oracle/';
GRANT READ, WRITE ON DIRECTORY DIR_NAME TO scott;

2. Whenever you export an online database/schema, make sure the export is consistent. The legacy CONSISTENT=Y parameter is still accepted in 11.2 (Data Pump translates it to FLASHBACK_TIME). If needed, you can pin the export to a specific SCN or time with FLASHBACK_SCN or FLASHBACK_TIME.

Check SCN/TIMESTAMP:
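A quick way to check them (assuming SELECT privilege on V$DATABASE):

```sql
-- Current SCN of the database; can be passed to expdp as FLASHBACK_SCN=<scn>
SELECT current_scn FROM v$database;

-- Current time in the format FLASHBACK_TIME expects
SELECT TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') AS now FROM dual;
```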

Or use the syntax below; it ensures your expdp is consistent as of the moment the export starts.
Note: It has been used many times successfully.

 
expdp \"/ as sysdba\" FLASHBACK_TIME=\"TO_TIMESTAMP\(TO_CHAR\(SYSDATE,\'YYYY-MM-DD HH24:MI:SS\'\),\'YYYY-MM-DD HH24:MI:SS\'\)\"

1. Estimate the dump size without doing a real export.

 
expdp \'/ as sysdba\' directory=DATAPUMP_DIR full=y ESTIMATE_ONLY=y
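By default the estimate is based on block counts; ESTIMATE=STATISTICS switches it to optimizer statistics, which can be faster but relies on the statistics being current. A sketch using the same connection as above:

```shell
# Estimate from optimizer statistics instead of the default block counts
expdp \'/ as sysdba\' directory=DATAPUMP_DIR full=y ESTIMATE_ONLY=y ESTIMATE=STATISTICS
```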

2. Export/import without a password.

Note: With Flashback option.

 
expdp \"/ as sysdba\" FLASHBACK_TIME=\"TO_TIMESTAMP\(TO_CHAR\(SYSDATE,\'YYYY-MM-DD HH24:MI:SS\'\),\'YYYY-MM-DD HH24:MI:SS\'\)\" directory=DIR_NAME full=y dumpfile=full_DB.dmp logfile=full_db.log compression=all

Note: Without flashback option

 
expdp \'/ as sysdba\' directory=DATAPUMP_DIR full=y

3. Full database export and import.

 
expdp user/pwd DIRECTORY=DIR_NAME DUMPFILE=full_db.dmp logfile=full_db.log FULL=y
impdp user/pwd DIRECTORY=DIR_NAME DUMPFILE=full_db.dmp logfile=impfull_db.log FULL=y

4. Schema(s) export and import.

 
expdp user/pwd DIRECTORY=DIR_NAME DUMPFILE=full_db.dmp logfile=full_db.log schemas=SCHEMA1,SCHEMA2
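The matching import can restore the same schemas, or remap one into a different target user; a sketch (SCHEMA1_COPY is a placeholder name):

```shell
# Import the schemas exactly as exported
impdp user/pwd DIRECTORY=DIR_NAME DUMPFILE=full_db.dmp logfile=imp_schemas.log schemas=SCHEMA1,SCHEMA2

# Or remap SCHEMA1 into a different target schema
impdp user/pwd DIRECTORY=DIR_NAME DUMPFILE=full_db.dmp logfile=imp_remap.log REMAP_SCHEMA=SCHEMA1:SCHEMA1_COPY
```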

5. Check the progress of an export/import.

 
column owner_name format a10
select * from dba_datapump_jobs;

Sample Output:

 
set linesize 320
column username format a10
select t.username,
       t.start_time,
       to_char(t.last_update_time,'YYYY-MM-DD HH24:MI:SS') last_update_time,
       t.time_remaining, t.elapsed_seconds,
       t.opname, t.sofar, t.totalwork
from v$session_longops t
where /* t.username = 'user_name' and */ t.target_desc = 'EXPORT';

Sample Output:

6. Transportable tablespace export and import.

Notes:
1. You must make the tablespace(s) read-only before the export.
2. You should use TRANSPORT_FULL_CHECK=Y to ensure the tablespaces are self-contained, i.e. no other tablespace holds dependent objects.

 
ALTER TABLESPACE tblspace1 READ ONLY ;
ALTER TABLESPACE tblspace2 READ ONLY ;
EXPDP system/password DIRECTORY =DIR_NAME DUMPFILE=trns_tbl.dmp LOGFILE=trns_tbl.log TRANSPORT_TABLESPACES=tblspace1,tblspace2 TRANSPORT_FULL_CHECK=Y
ALTER TABLESPACE tblspace1 READ WRITE ;
ALTER TABLESPACE tblspace2 READ WRITE ;
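On the target side, copy the dump file and the datafiles over first, then plug the tablespaces in with impdp; a sketch (the datafile paths are placeholders for wherever you copied them):

```shell
impdp system/password DIRECTORY=DIR_NAME DUMPFILE=trns_tbl.dmp LOGFILE=imp_trns_tbl.log TRANSPORT_DATAFILES='/u01/oradata/tblspace1_01.dbf','/u01/oradata/tblspace2_01.dbf'
```

Remember to set the tablespaces back to READ WRITE on the target after the import completes.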

7. Table export and import.

Export:

 
expdp scott/tiger tables=EMP,DEPT directory=DIR_NAME dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

Import:

 
impdp scott/tiger tables=EMP,DEPT directory=DIR_NAME dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

8. Parallel export and import to improve performance.

Export:

 
expdp scott/tiger directory=DIR_NAME parallel=4 dumpfile=fulldb_%U.dmp logfile=exfulldb.log full=y

Import:

 
impdp scott/tiger directory=DIR_NAME parallel=4 dumpfile=fulldb_%U.dmp logfile=imfulldb.log
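A running Data Pump job can also be controlled from the interactive console by attaching to it; the job name below is a placeholder (the real name is printed when expdp starts and is listed in DBA_DATAPUMP_JOBS):

```shell
expdp scott/tiger ATTACH=SYS_EXPORT_FULL_01
# At the Export> prompt:
#   STATUS          -- show current progress
#   PARALLEL=8      -- change the degree of parallelism on the fly
#   STOP_JOB        -- stop the job; it can be restarted later with ATTACH
#   CONTINUE_CLIENT -- resume logging to the client
```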

9. EXCLUDE tables and schemas from a full database export/import.

Notes:
1. Do not prefix the owner name; OWNER.TABLE_NAME will not work.
2. It will EXCLUDE every table in the list, even if more than one owner has an object with the same name.
For example: if TABLE_NAME1 is owned by both USER1 and USER2, it will be EXCLUDED from both schemas.
3. The command below works only in a parameter file, not on the command line.
expdp_exclude_tables_n_users.par

 
DIRECTORY=DIR_NAME
DUMPFILE=dump_name.dmp
LOGFILE=log_file_name.log
FULL=Y
EXCLUDE=STATISTICS
EXCLUDE=TABLE:"IN ('TABLE_NAME1','TABLE_NAME2','TABLE_NAME3')"
EXCLUDE=SCHEMA:"IN ('SCHEMA1', 'SCHEMA2')"
-- END expdp_exclude_tables_n_users.par

Command line syntax of above par

 
expdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=abc.dmp FULL=Y
EXCLUDE=TABLE:\"IN \(\'TABLE_NAME1\', \'TABLE_NAME2\', \'TABLE_NAME3\'\)\"
EXCLUDE=SCHEMA:\"IN \(\'SCHEMA1\', \'SCHEMA2\'\)\"
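The same EXCLUDE filters are accepted by impdp, so a full dump can be imported while skipping unwanted schemas; a sketch:

```shell
impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=abc.dmp FULL=Y EXCLUDE=SCHEMA:\"IN \(\'SCHEMA1\', \'SCHEMA2\'\)\"
```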

Backup: Expdp/impdp database full export without password

expdp
Note: It will create a consistent dump file, which can be used to import the full database, a schema, etc.

 
expdp \"/ as sysdba\" FLASHBACK_TIME=\"TO_TIMESTAMP\(TO_CHAR\(SYSDATE,\'YYYY-MM-DD HH24:MI:SS\'\),\'YYYY-MM-DD HH24:MI:SS\'\)\" directory=DIR_NAME full=y dumpfile=full_DB.dmp logfile=full_db.log compression=all

impdp
Note: While importing, you need not mention the flashback option.

 
impdp \'/ as sysdba\' directory=DIR_NAME dumpfile=full_DB.dmp logfile=impfull_db.log full=y


DB Admin: Script to drop everything from a schema

Warning: BE CAREFUL! THIS DROPS EVERYTHING UNDER THE LOGON/OWNER THAT RUNS IT.

SET SERVEROUTPUT ON SIZE 1000000
BEGIN
  -- Loop over every droppable object owned by the current user
  FOR cur_rec IN (SELECT object_name, object_type
                    FROM user_objects
                   WHERE object_type IN ('TABLE','SYNONYM','VIEW','PACKAGE',
                                         'PROCEDURE','FUNCTION','SEQUENCE','TYPE')) LOOP
    BEGIN
      IF cur_rec.object_type = 'TABLE' THEN
        EXECUTE IMMEDIATE 'DROP ' || cur_rec.object_type || ' "' || cur_rec.object_name || '" CASCADE CONSTRAINTS';
      ELSE
        EXECUTE IMMEDIATE 'DROP ' || cur_rec.object_type || ' "' || cur_rec.object_name || '"';
      END IF;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.put_line('FAILED: DROP ' || cur_rec.object_type || ' "' || cur_rec.object_name || '"');
    END;
  END LOOP;
END;
/

PURGE RECYCLEBIN;
SET ERRORLOGGING OFF

Built-in scripts shipped with the database.

$ORACLE_HOME/rdbms/admin

Script name - Description

addmrpt.sql - Automatic Database Diagnostic Monitor (ADDM) report
addmrpti.sql - Automatic Database Diagnostic Monitor (ADDM) report
addmtmig.sql - Post-upgrade script to load new ADDM task metadata tables for task migration
ashrpt.sql - Active Session History (ASH) report
ashrpti.sql - Active Session History (ASH) report; RAC and standby database support added in 2008
ashrptistd.sql - ASH report helper script for obtaining user input when run on a standby
awrblmig.sql - AWR baseline migration
awrddinp.sql - Retrieves dbid, eid, filename for SWRF and ADDM reports
awrddrpi.sql - Workload Repository compare periods report
awrddrpt.sql - Produces the Workload Repository compare periods report
awrextr.sql - Helps users extract data from the AWR
awrgdinp.sql - AWR global compare periods report input variables
awrgdrpi.sql - Workload Repository global compare periods report
awrgdrpt.sql - AWR global differences report
awrginp.sql - AWR global input
awrinfo.sql - Outputs general Automatic Workload Repository (AWR) information such as the size and data distribution
awrinput.sql - Common code used for SWRF reports and ADDM
awrload.sql - Uses Data Pump to load information from dump files into the AWR
awrrpt.sql - Automatic Workload Repository (AWR) report
awrrpti.sql - Automatic Workload Repository (AWR) report
awrsqrpi.sql - Reports on differences in values recorded in two different snapshots
awrsqrpt.sql - Produces a workload report on a specific SQL statement
catalog.sql - Builds the data dictionary views
catblock.sql - Creates views that dynamically display lock dependency graphs
catclust.sql - Builds the DBMS_CLUSTDB built-in package
caths.sql - Installs packages for administering heterogeneous services
catio.sql - Allows I/O to be traced on a table-by-table basis
catnoawr.sql - Uninstalls AWR features
catplan.sql - Builds PLAN_TABLE$, a public global temporary table version of PLAN_TABLE
dbfs_create_filesystem.sql - DBFS create file system script
dbfs_create_filesystem_advanced.sql - DBFS create file system script
dbfs_drop_filesystem.sql - DBFS drop file system script
dbmshptab.sql - Permanent structures supporting the DBMS_HPROF hierarchical profiler
dbmsiotc.sql - Analyzes chained rows in index-organized tables
dbmspool.sql - Enables the DBA to lock PL/SQL packages, SQL statements, and triggers into the shared pool
dumpdian.sql - Dumps Diana out of a database in a human-readable format (exec dumpdiana.dump('DMMOD_LIB');)
epgstat.sql - Shows the status of the embedded PL/SQL gateway and the XDB HTTP listener; should be run by a user with the XDBADMIN and DBA roles
hangdiag.sql - Hang analysis/diagnosis script
prgrmanc.sql - Purges from the RMAN recovery catalog the records marked as deleted by the user
recover.bsq - Creates Recovery Manager tables and views; read the header very carefully if considering an edit
sbdrop.sql - SQL*Plus command file to drop the user and tables for a readable standby
sbduser.sql - SQL*Plus command file to drop the user that contains the standby statspack database objects
sbreport.sql - Calls sbrepins.sql to produce the standby statspack report; must run as the standby statspack owner, stdbyperf
scott.sql - Creates the SCOTT schema objects and loads the data
secconf.sql - Secure configuration script: laughable but better than the default
spauto.sql - SQL*Plus command file to automate the collection of STATSPACK statistics
spawrrac.sql - Generates a global AWR report covering performance statistics on all nodes of a cluster
spcreate.sql - Creates the STATSPACK user, tables, and package
sppurge.sql - Purges a range of STATSPACK data
sprepcon.sql - STATSPACK report configuration
sprepsql.sql - Defaults the dbid and instance number to the currently connected instance, then calls sprsqins.sql to produce the standard Statspack SQL report
sprsqins.sql - STATSPACK report
sql.bsq - Drives the creation of the Oracle catalog and data dictionary objects
tracetab.bsq - Creates the tracing table for the DBMS_TRACE built-in package
userlock.sql - Routines that allow the user to request, convert, and release locks
utlchain.sql - Creates the default table for storing the output of the ANALYZE LIST CHAINED ROWS command
utlchn1.sql - Creates the default table for storing the output of the ANALYZE LIST CHAINED ROWS command
utlconst.sql - Constraint check utility to check for valid date constraints
utldim.sql - Builds the exception table for DBMS_DIMENSION.VALIDATE_DIMENSION
utldtchk.sql - Verifies that a valid database object has correct dependency$ timestamps for all its parent objects; violations can show up as invalid dependency references [DEP/INV] in library cache dumps, ORA-06508 (could not find program unit being called), PLS-00907 (cannot load library unit), or ORA-00600 [kksfbc-reparse-infinite-loop]
utldtree.sql - Shows objects recursively dependent on a given object
utledtol.sql - Creates the outline tables OL$, OL$HINTS, and OL$NODES in a user schema for working with stored outlines
utlexcpt.sql - Builds the exception table for constraint violations
utlexpt1.sql - Creates the default table (EXCEPTIONS) for storing exceptions from enabling constraints; handles both physical and logical rowids
utlip.sql - Invalidates all existing PL/SQL modules (procedures, functions, packages, types, triggers, views) in a database so that they are recompiled later, either automatically or deliberately
utllockt.sql - Prints the sessions in the system that are waiting for locks, and the locks they are waiting for
utlpwdmg.sql - Creates the default profile password VERIFY_FUNCTION
utlrdt.sql - Recompiles all DDL triggers in UPGRADE mode at the end of one of three operations: a database upgrade, utlirp (invalidate and recompile all PL/SQL), or dbmsupgnv/dbmsupgin (convert PL/SQL to native/interpreted)
utlrp.sql - Recompiles all invalid objects in the database
utlscln.sql - Copies a snapshot schema from another snapshot site
utlsxszd.sql - Calculates the required size for the SYSAUX tablespace
utltkprf.sql - Grants public access to all views used by TKPROF with the verbose=y option
utluiobj.sql - Outputs the difference between objects invalid post-upgrade and those that were invalid pre-upgrade
utlu112i.sql - Provides information about databases prior to upgrade (supported releases: 9.2.0, 10.1.0 and 10.2.0)
utlvalid.sql - Creates the default table for storing the output of the ANALYZE VALIDATE command on a partitioned table
utlxaa.sql - Defines a user-defined workload table for SQL Access Advisor; a user can insert SQL statements and then specify the table as a workload source
utlxmv.sql - Creates the MV_CAPABILITIES_TABLE for DBMS_MVIEW.EXPLAIN_MVIEW
utlxplan.sql - Builds PLAN_TABLE, required for EXPLAIN PLAN, DBMS_XPLAN, and AUTOTRACE (replaced by catplan.sql)
utlxplp.sql - Displays the explain plan from PLAN_TABLE using the DBMS_XPLAN built-in; includes parallel execution information
utlxpls.sql - Displays the explain plan from PLAN_TABLE using the DBMS_XPLAN built-in; does not include parallel query information
utlxrw.sql - Builds the REWRITE_TABLE for EXPLAIN_REWRITE tests
xdbvlo.sql - Validates XML DB schema objects

$ORACLE_HOME/sqlplus/admin

glogin.sql - SQL*Plus global login "site profile" file; add SQL*Plus commands here to execute them whenever a user starts SQL*Plus and/or connects
plustrce.sql - Creates the PLUSTRACE role required to use AUTOTRACE
pupbld.sql - Creates PRODUCT_USER_PROFILE

$ORACLE_HOME/sysman/admin/scripts/db

dfltAccPwd.sql - Checks for default accounts with default passwords
hanganalyze.sql - Hang analysis script for stand-alone databases
hanganalyzerac.sql - Hang analysis script for RAC clusters

DB Admin: Used, auto and max space of datafile/s of the given tablespace in Mb


Given tablespace used, auto and max in Mb

 
set lines 200
set pagesize 150
break on report
compute sum of CURRENT_USED_MB on report
col file_name format a70
select
file_id,
file_name,
bytes/1024/1024 CURRENT_USED_MB,
autoextensible,
maxbytes/1024/1024 CAN_GROW_TO_MAX_MB
from
dba_data_files
where
tablespace_name=upper('&Tbs_Name');

Sample output:

DB Admin: Used, auto and max space of datafile/s of given tablespace in Gb

Given tablespace used, auto and max in GB

set lines 200
set pagesize 150
break on report
compute sum of CURRENT_USED_GB on report
col file_name format a70
select
file_id,
file_name,
bytes/1024/1024/1024 CURRENT_USED_GB,
autoextensible,
maxbytes/1024/1024/1024 CAN_GROW_TO_MAX_GB
from
dba_data_files
where
tablespace_name=upper('&Tbs_Name');

Sample output:

Tablespace usage history

Tablespace usage history, one row per day (from the AWR history views).

select thedate,
gbsize,
prev_gbsize,
gbsize-prev_gbsize diff
from (
select thedate,
gbsize,
lag(gbsize,1) over (order by r) prev_gbsize
from (
select rownum r,
thedate,
gbsize
from (
select trunc(thedate) thedate,
max(gbsize) gbsize
from (
select to_date(to_char(snapshot.begin_interval_time,'YYYY-MON-DD HH24:MI:SS'),'YYYY-MON-DD HH24:MI:SS') thedate,
round((usage.tablespace_usedsize*block_size.value)/1024/1024/1024,2) gbsize
from dba_hist_tbspc_space_usage usage,
v$tablespace tablespace,
dba_hist_snapshot snapshot,
v$parameter block_size
where usage.snap_id = snapshot.snap_id
and usage.tablespace_id = tablespace.ts#
and tablespace.name = '&tablespace'
and block_size.name = 'db_block_size'
)
group by
trunc(thedate)
order by
trunc(thedate)
)
)
);

Enter value for Tablespace: TEMP

Sample Output:

THEDATE       GBSIZE PREV_GBSIZE       DIFF
--------- ---------- ----------- ----------
15-OCT-15      13.04
16-OCT-15      11.21       13.04      -1.83
17-OCT-15        .91       11.21      -10.3
18-OCT-15          0         .91       -.91
19-OCT-15       6.44           0       6.44
20-OCT-15       4.45        6.44      -1.99
21-OCT-15      15.89        4.45      11.44
22-OCT-15       3.88       15.89     -12.01

8 rows selected.