
Wednesday, May 23, 2018

stream_pool_size

Check the spilling status and the pool-size recommendations for your Streams pool with the following query.


select inst_id, streams_pool_size_for_estimate, estd_spill_time,
       estd_unspill_time, streams_pool_size_factor
from   gv$streams_pool_advice
order  by 1, 5;
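
If the advisor shows a non-zero ESTD_SPILL_TIME at the current size (STREAMS_POOL_SIZE_FACTOR = 1), the pool can be enlarged. A minimal sketch, assuming the instance uses a fixed STREAMS_POOL_SIZE rather than automatic memory management; the 512M value is only an example:

```sql
-- Example only: pick a size at or above the first advisor row
-- whose ESTD_SPILL_TIME drops to zero.
alter system set streams_pool_size = 512M scope = both sid = '*';
```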


Saturday, September 27, 2014

expdp flashback_time tips

userid="sys/don as sysdba"
job_name=job1
# parallel=2
Flashback_time at a specific point in time
FLASHBACK_TIME="TO_TIMESTAMP('13-05-2010 13:20:00', 'DD-MM-YYYY HH24:MI:SS')"

Flashback_time at current time
FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"

Flashback_time at current time (shorthand)
flashback_time=systimestamp

Flashback_time at a fixed daily time, for example 11 PM (the format mask must include the time component, otherwise TO_TIMESTAMP truncates to midnight)
flashback_time="to_timestamp(to_char(trunc(sysdate)+22/24,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"

directory=dpdump
filesize=20000000000
EXCLUDE=statistics
SCHEMAS=SCOTT
dumpfile=expdp_demo_scott_%U.dmp
logfile=expdp_demo_scott.log
COMPRESSION=ALL
CLUSTER=N
CONTENT=METADATA_ONLY

Saturday, August 23, 2014

expdp full database export / Shell Script

expdp_full_db.ksh

NOTE : Change DP1, ORACLE_HOME, DIRECTORY, and the MAIL address, and adjust the retention days before the old-dump cleanup.

#!/bin/ksh
export ORACLE_SID=$1
export ORACLE_HOME=/ora01/oracle/product/11.2.0.4/db
export PATH=$ORACLE_HOME/bin:$PATH
export EXPDATE=`date "+%m%d_%H%M"`
export DP1=/oramisc01/oracle/export/${ORACLE_SID}

$ORACLE_HOME/bin/expdp \'/ as sysdba\'  job_name=job1 directory=dpdump dumpfile=expdp_${ORACLE_SID}_full_${EXPDATE}.dmp full=y flashback_time=\"TO_TIMESTAMP\(TO_CHAR\(SYSDATE,\'YYYY-MM-DD HH24:MI:SS\'\),\'YYYY-MM-DD HH24:MI:SS\'\)\" logfile=expdp_${ORACLE_SID}_full_${EXPDATE}.log exclude=statistics COMPRESSION=ALL \
CLUSTER=N filesize=20000000000 parallel=1

/bin/gzip ${DP1}/expdp_${ORACLE_SID}_full_${EXPDATE}.dmp


find ${DP1} -name expdp_${ORACLE_SID}\* -mtime +7 -exec rm {} \;


tail -15 ${DP1}/expdp_${ORACLE_SID}_full_${EXPDATE}.log | mailx -s "${ORACLE_SID} database export output" abx@gmail.com


Tuesday, April 19, 2011

DATAPUMP With PARALLEL > 1 ON 11.2 RAC



Symptoms
Data Pump on 11.2 RAC with PARALLEL > 1 hits the following errors:


ORA-31693: Table data object "SCOTT"."ESCSTATUS" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "/ora01/oracle/admin/demo/dpdump/demo_expdp_SCOTT.041911_06_02.dmp" for write
ORA-19505: failed to identify file "/ora01/oracle/admin/demo/dpdump/demo_expdp_SCOTT.041911_16_02.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Cause
Starting with 11.2, Data Pump introduces the new parameter CLUSTER.

CLUSTER : Default=Y

Purpose :
Determines whether Data Pump can use Oracle Real Application Clusters (RAC)
resources and start workers on other Oracle RAC instances.

Syntax and Description : CLUSTER=[Y | N]


Solution
To force Data Pump to use only the instance where the job is started and to replicate pre-Oracle Database 11g release 2 (11.2) behavior, specify CLUSTER=N.

Example:
$ expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_clus%U.dmp CLUSTER=N PARALLEL=3

Hope this helps! Rupam



Tuesday, October 05, 2010

Datapump Orphaned Job


Follow the steps below to clean up a Data Pump orphaned job.
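
A typical cleanup sequence, sketched here under the assumption that the orphaned job is SCOTT's JOB1; always verify the job state in DBA_DATAPUMP_JOBS before dropping anything:

```sql
-- 1. Identify orphaned jobs: state NOT RUNNING with no attached sessions.
select owner_name, job_name, state, attached_sessions
from   dba_datapump_jobs;

-- 2. The master table has the same name as the job; dropping it
--    removes the orphaned job definition (owner/job names are examples).
drop table scott.job1;

-- 3. The dropped master table may land in the recycle bin; purge it
--    so the job name can be reused.
purge dba_recyclebin;
```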


DataPump


Setup

Connect to the instance as SYSDBA using SQL*Plus:
grant create any directory to scott;
create or replace directory dpdump as '/u01/app/oracle/export/';
grant read, write on directory dpdump to scott;


Check the directories

SQL> select * from dba_directories;

Schema Exports/Imports

The OWNER parameter of exp has been replaced by the SCHEMAS parameter.

Export

expdp scott/tiger@demo parfile=expdp.par


script : expdp.par

job_name=job1
directory=dpdump
filesize=20000000000
schemas=scott
dumpfile=expdp_demo_scott.dmp
logfile=expdp_demo_scott.log
FLASHBACK_TIME="TO_TIMESTAMP('05-10-2010 10:30:00', 'DD-MM-YYYY HH24:MI:SS')"

Import

impdp scott/tiger@demo parfile=impdp.par

script : impdp.par
directory=DPDUMP
job_name=job1
EXCLUDE=statistics
schemas=SCOTT
dumpfile=expdp_demo_scott.dmp
logfile=impdp_demo_scott.log

Monitoring

SQL> select count(*) from scott.job1;   -- row count of the job's master table
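
Progress can also be watched from the dictionary; a sketch using standard views (the job name JOB1 is an example):

```sql
-- Data Pump jobs currently known to the database
select owner_name, job_name, operation, state, attached_sessions
from   dba_datapump_jobs;

-- Per-worker progress of long-running transfers
select sid, serial#, sofar, totalwork,
       round(sofar / totalwork * 100, 1) pct_done
from   v$session_longops
where  opname like '%JOB1%'
and    totalwork > 0
and    sofar <> totalwork;
```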

Monitor export Using expdp

expdp scott/tiger@demo attach=job1

Hope this helps. Regards, Rupam

expdp fails with ORA-31626, ORA-31633, ORA-06512, ORA-06512 and ORA-00955


While exporting data using expdp, the following error was raised:

Error

ORA-31626: job does not exist
ORA-31633: unable to create master table "SYS.JOB1"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-00955: name is already used by an existing object

Cause

A table with the same name as the job's master table (JOB1) already exists, typically left over from a previous or orphaned job.

Solution

Option 1: Change the job name
Option 2: Clean up the orphaned job (see the Datapump Orphaned Job post above)

Monday, October 04, 2010

Refresh schema


Summary

Refresh the test database schema (scott) from the production database schema (scott).

  1. export production database schema(scott) using datapump
  2. copy dump file to test server
  3. generate script to drop objects from test database
  4. drop objects from test schema(scott) and purge from recyclebin
  5. import data into test database schema(scott)
  6. recompile invalid objects
  7. match the objects count between production and test schema
  8. collect statistics in test database schema (scott)
  9. open for business
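
Steps 3, 4, and 6 can be scripted from the dictionary. A sketch, with the drop generation run as the test schema owner; the object types listed are the common ones and may need extending:

```sql
-- Step 3: generate DROP statements for the test schema's objects
select 'drop ' || object_type || ' "' || object_name || '"' ||
       decode(object_type, 'TABLE', ' cascade constraints purge', '') || ';'
from   user_objects
where  object_type in ('TABLE','VIEW','SEQUENCE','SYNONYM','PROCEDURE',
                       'FUNCTION','PACKAGE','TRIGGER');

-- Step 4: empty the recycle bin after running the generated drops
purge recyclebin;

-- Step 6: recompile invalid objects after the import (run as SYSDBA)
exec utl_recomp.recomp_serial('SCOTT');
```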

Thursday, May 24, 2007

Datapump Sample Scripts

# Prepare for datapump
sqlplus / as sysdba
grant create any directory to scott;
create or replace directory dumpdir as '/ora01/oracle/dpdump/';
grant read,write on directory dumpdir to scott;

# export schema
expdp scott/manager schemas=scott dumpfile=t1 logfile=t1.log directory=dumpdir job_name=dpjob1

# import schema
impdp scott/manager schemas=scott dumpfile=t1 logfile=t1.log directory=dumpdir job_name=dpjob1

# tables
expdp scott/manager tables=t1 dumpfile=t1 logfile=t1.log directory=dumpdir job_name=dpjob1

--
# export schema metadata only
date
ORACLE_SID=demod1;ORAENV_ASK=NO;. oraenv;ORAENV_ASK=YES
expdp system dumpfile=demod_dmp_dir:ACT_OWNCStructure.dmp logfile=demod_dmp_dir:ACT_OWNCStructure.log \
schemas=act_ownc content=METADATA_ONLY
date


# export schema data only
date
ORACLE_SID=demod1;ORAENV_ASK=NO;. oraenv;ORAENV_ASK=YES
expdp system dumpfile=demod_dmp_dir:demod_ACT_OWNC.dmp logfile=demod_dmp_dir:demod_ACT_OWNC.log \
schemas=act_ownc content=DATA_ONLY
date

# export schema data estimate_only
date
ORACLE_SID=demod1;ORAENV_ASK=NO;. oraenv;ORAENV_ASK=YES
expdp system estimate_only=y logfile=demod_dmp_dir:demod_ACT_OWNC_est.log \
schemas=act_ownc content=DATA_ONLY
date

# export data + metadata
date
ORACLE_SID=demod1;ORAENV_ASK=NO;. oraenv;ORAENV_ASK=YES
expdp act_ownb dumpfile=demod_dmp_dir:ACT_OWNBSelected2.dmp logfile=demod_dmp_dir:ACT_OWNBSelectedExport2.log \
tables=\
ACT_OWN_STATS,\
ALIAS,\
ATTRIBUTE
date

# export tables data only
date
ORACLE_SID=demod1;ORAENV_ASK=NO;. oraenv;ORAENV_ASK=YES
expdp act_ownc dumpfile=demod_dmp_dir:demod_ACT_OWNCp1.dmp logfile=demod_dmp_dir:demod_ACT_OWNCp1.log \
content=DATA_ONLY tables=\
TRAFFIC_UNCOMPRESSED_BASE,\
TRAFFIC_UNCOMPRESSED_FORECAST,\
TRAFFIC_UNCOMPRESSED_SP1,\
TRAFFIC_UNCOMPRESSED_SP3,\
TRAFFIC_UNCOMPRESSED_SP2
date


# import data ; truncate existing table before import
date
ORACLE_SID=demod1;ORAENV_ASK=NO;. oraenv;ORAENV_ASK=YES
impdp system/eaxo042 dumpfile=demod_dmp_dir:ACT_OWNBSelected.dmp logfile=demod_dmp_dir:actowncimp.log remap_schema=act_ownb:act_ownb \
table_exists_action=truncate
date

# import data Only; truncate existing table before import
date
ORACLE_SID=demod1;ORAENV_ASK=NO;. oraenv;ORAENV_ASK=YES
impdp system dumpfile=demod_dmp_dir:ACT_OWNDSPtables.dmp logfile=demod_dmp_dir:actowncimpsp.log \
remap_schema=act_ownd:act_ownc content=data_only \
table_exists_action=truncate
date


# import data append
date
ORACLE_SID=demod1;ORAENV_ASK=NO;. oraenv;ORAENV_ASK=YES
impdp system dumpfile=demod_dmp_dir:ACT_OWNB_mar13.dmp logfile=demod_dmp_dir:actownbimpmar13.log remap_schema=act_ownb:act_ownb \
table_exists_action=append
date

Thursday, May 03, 2007

export and import using pipe

# export and compress using pipe

export EXPDATE=`date "+%m%d_%H%M"`
export DUMPFILE=demo_user_${EXPDATE}.dmp
export EXPLOGFILE=demo_user_${EXPDATE}.log


/usr/sbin/mknod ${DUMPFILE} p
/usr/bin/gzip < ${DUMPFILE} > ${DUMPFILE}.gz &
sleep 30

exp scott/tiger file=${DUMPFILE} log=${EXPLOGFILE} \
owner=SYSADM consistent=y buffer=10000000 direct=Y statistics=none

tail -5 demo_user_${EXPDATE}.log | mailx -s "demo export" me@google.com

# import from compressed file using pipe

export DUMPFILE=myexp.dmp
export IMPLOGFILE=imp_user_${EXPDATE}.log

/usr/sbin/mknod ${DUMPFILE} p

imp scott/tiger file=${DUMPFILE} log=${IMPLOGFILE} \
fromuser=SYSADM touser=SYSADM indexes=y ignore=y buffer=10000000 commit=y &

/usr/bin/gzip -d < ${DUMPFILE}.gz > ${DUMPFILE}
sleep 300

tail -5 ${IMPLOGFILE} | mailx -s "demo import" me@google.com

export full, gzip it and remove dumps older than 5 days - all in one

#!/bin/ksh
export ORACLE_SID=demo
export ORACLE_HOME=/ora01/oracle/product/db_1020
export PATH=$ORACLE_HOME/bin:$PATH
export EXPDATE=`date "+%m%d_%H%M"`

$ORACLE_HOME/bin/exp \'/ as sysdba\' file=demo_full_${EXPDATE}.dmp full=y statistics=none log=demo_full_${EXPDATE}.log

/bin/gzip demo_full_${EXPDATE}.dmp

find . -name demo_full\* -mtime +5 -exec rm {} \;

tail -5 demo_full_${EXPDATE}.log | mailx -s "demo export" me@google.com

Tuesday, April 24, 2007

Query Import Status

col table_name format a30
select substr(sql_text,instr(sql_text,'INTO "'),30) table_name,
rows_processed,
round((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes,
trunc(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_min
from sys.v_$sqlarea
where sql_text like 'INSERT %INTO "%'
and command_type = 2
and open_versions > 0;

export/import using pipe at the same time


# Reference: Oracle Metalink Note 1018477.6 ‘Exporting on Unix Systems’
# example for demo 9.2.0.4 database on apollo (export consists of MMADMIN schema-
# you can also perform a full exp/imp to import all users to the target database):
#


1. Setup pipe and perform export from source

# example for demo 9.2.0.4 database on apollo

oracle@apollo> mknod /ora01/oracle/admin/demo/exp/demo.dmp p
oracle@apollo> exp system file=/ora01/oracle/admin/demo/exp/demo.dmp \
> owner=MMADMIN consistent=y statistics=none log=demo.log
Export: Release 9.2.0.4.0 - Production on Fri Feb 17 16:22:47 2006
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Password:
Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.4.0 - Production
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user MMADMIN
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user MMADMIN
About to export MMADMIN's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export MMADMIN's tables via Conventional Path ...
. . exporting table MMACCESS 2 rows exported
. . exporting table MMACCESS_BK 5 rows exported
. . exporting table MMACTION 12 rows exported
. . exporting table MMACTIONGROUPING 17 rows exported . . . .




2. Setup pipe and perform import on target

# example for demo 10.2.0.1 database on atlantis

oracle@atlantis> mknod /ora01/oracle/admin/demo/exp/demo2.dmp p
oracle@atlantis> ssh apollo dd if=/ora01/oracle/admin/demo/exp/demo.dmp \
> > /ora01/oracle/admin/demo/exp/demo2.dmp &
[1] 8761
oracle@atlantis> imp system file=/ora01/oracle/admin/demo/exp/demo2.dmp \
> fromuser=MMADMIN touser=MMADMIN buffer=10000000 commit=y log=demo.log
Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing MMADMIN's objects into MMADMIN
. . importing table "MMACCESS" 2 rows imported
. . importing table "MMACCESS_BK" 5 rows imported
. . importing table "MMACTION" 12 rows imported
. . importing table "MMACTIONGROUPING" 17 rows imported
. . importing table "MMAPPLICATION" 5 rows imported
. . importing table "MMCLASS" 202 rows imported
. . importing table "MMCLASSMERGEOPTION" 0 rows imported
. . importing table "MMCONTROL" 5 rows imported . . . .