In this post we will see how we can use the Data Pump export/import utility when a container database (PDB) is involved.
In our test case we have a PROD and a TEST server. Both databases are version 19c and the operating system is OEL 7.9.
A. The first step is preparation for the export process on the Prod and Test databases.
- Check the size of the schema that will be exported from the primary system.
SELECT SUM(bytes)/1024/1024/1024 AS total_size_GB FROM dba_segments WHERE owner = 'X';
-- 500 GB
- On both systems, check the directory path and the available disk size.
-- on PROD database
SQL> alter session set container=PROD;
SQL> SELECT * FROM dba_directories;
SQL> create directory EXP_DIR as '/orabck/export/';
SQL> grant exp_full_database to helios;
SQL> grant read, write on directory EXP_DIR to helios;
-- on TEST database
SQL> alter session set container=TEST;
SQL> SELECT * FROM dba_directories;
SQL> create directory EXP_DIR as '/orabck/export/';
SQL> grant exp_full_database to helios;
SQL> grant read, write on directory EXP_DIR to helios;
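- To confirm that the directory object and its grants are in place on each side, a quick check (a minimal sketch; EXP_DIR and helios are the names used above):
SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'EXP_DIR';
SQL> SELECT grantee, privilege FROM dba_tab_privs WHERE table_name = 'EXP_DIR';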
- You may need to save the TEST user's DDL (it contains the password hash) before dropping the user, so the password can be restored after the import.
select dbms_metadata.get_ddl('USER','PPC_WEB') from dual;
-- User X on Test database
ALTER USER "X" IDENTIFIED BY VALUES 'S:F1B891902ADD0F179EB37EBBBD75B548E81064D4691120D214A9A810B810BC8751810A3EAF5B2A930E77FD450984A9B1EAAB58F3136918B194C';
- Check tablespaces on both databases.
select distinct tablespace_name from dba_extents where owner = 'X';
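- Before the import, it is also worth confirming that the same tablespaces exist on TEST with enough free space for the roughly 500 GB of segments. A minimal sketch (run on the TEST database; it does not account for autoextend):
SQL> SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) AS free_gb FROM dba_free_space GROUP BY tablespace_name;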
B. EXPORT Process Steps
- Check the CPU count if you will use the PARALLEL option.
-- CPU(s): 12
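- The CPU(s) value above is typically taken from the OS (for example lscpu); if you prefer to check from inside the database, one option (a small sketch) is:
SQL> SELECT value AS num_cpus FROM v$osstat WHERE stat_name = 'NUM_CPUS';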
- Start the export process on the Prod database.
-- on prod
nohup expdp helios/xxx@PROD SCHEMAS=X DIRECTORY=EXP_DIR DUMPFILE=X_30032022_%U.dmp PARALLEL=6 LOGFILE=X_30032022.log compression=all EXCLUDE=STATISTICS &
-- Monitor the expdp session
SELECT B.USERNAME, A.SID, B.OPNAME, B.TARGET,
ROUND(B.SOFAR*100/B.TOTALWORK,0) || '%' AS "%DONE", B.TIME_REMAINING,
TO_CHAR(B.START_TIME,'YYYY/MM/DD HH24:MI:SS') START_TIME
FROM V$SESSION_LONGOPS B, V$SESSION A
WHERE A.SID = B.SID
AND B.OPNAME LIKE '%EXPORT%'
ORDER BY 6;
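- Another way to follow the job is through the Data Pump job view (a small sketch; the job name is auto-generated unless you set JOB_NAME in the expdp command):
SQL> SELECT owner_name, job_name, operation, job_mode, state, degree FROM dba_datapump_jobs;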
- Start the copy process to the test server.
nohup scp /orabck/export/*.dmp oracle@TEST_SERVER:/yedek/export &
C. On the Test server, let us prepare for the import process.
- Before dropping the user (if you need to), save the schema creation script.
- Kill the user's sessions.
select 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ',@' || inst_id || ''' immediate;' from gv$session where schemaname = 'X';
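- After running the generated KILL statements, you can verify that no sessions are left (a quick sketch):
select count(*) from gv$session where schemaname = 'X';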
- Lock User
ALTER USER X ACCOUNT LOCK;
- Drop user ==> on test server!!!
DROP USER X CASCADE;
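- To confirm the drop completed cleanly before the import, a minimal check (both queries should return no rows / zero):
SQL> SELECT username FROM dba_users WHERE username = 'X';
SQL> SELECT COUNT(*) FROM dba_objects WHERE owner = 'X';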
D. Start IMPORT
- Check test server CPU count
-- CPU(s): 8
- Start Import
-- X
export ORACLE_PDB_SID=TEST   # <<< YOU NEED TO SET THIS so that impdp connects to the TEST PDB!
impdp "'/ as sysdba'" directory=EXP_DIR dumpfile=X_30032022_%U.dmp parallel=4 logfile=imp_X_30032022.log
- Check the import process
SELECT B.USERNAME, A.SID, B.OPNAME, B.TARGET,
ROUND(B.SOFAR*100/B.TOTALWORK,0) || '%' AS "%DONE", B.TIME_REMAINING,
TO_CHAR(B.START_TIME,'YYYY/MM/DD HH24:MI:SS') START_TIME
FROM V$SESSION_LONGOPS B, V$SESSION A
WHERE A.SID = B.SID
AND B.OPNAME LIKE '%IMPORT%'
ORDER BY 6;
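- If the TEST user should keep its previous TEST password rather than the one brought over from PROD, you can reapply the hash captured in section A (a sketch; the placeholder stands for the value you saved earlier):
ALTER USER "X" IDENTIFIED BY VALUES '<hash_saved_in_section_A>';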
E. Gather statistics for the schema after the import --> it's recommended.
exec dbms_stats.gather_schema_stats('X', cascade=>TRUE, degree=>4);
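- To verify that the statistics were gathered, you can check LAST_ANALYZED (a quick sketch):
SQL> SELECT table_name, last_analyzed FROM dba_tables WHERE owner = 'X' ORDER BY last_analyzed;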