ORA-2020 – Too many database links in use

This is another one of those seldom-seen errors that you may encounter if you are in this business long enough. I was testing some database links for a database upgrade and saw this error during the test. A little background first. The database to be upgraded was Oracle version 8.1.6 (pause for dramatic effect) and the project manager wanted to upgrade it to the latest 12c version, as there was little in the way of complicated data. I reverse-engineered the database links and placed them in a test 12c database to see if they could still see their targets. I ran a basic 'select * from dual@link' for each and saw the ORA- error during the sequential execution of these statements:

SQL> select * from dual@link1;

D
-
X

SQL> select * from dual@link2;
select * from dual@link2
*
ERROR at line 1:
ORA-02020: too many database links in use

SQL> select * from dual@link3;
select * from dual@link3
*
ERROR at line 1:
ORA-02020: too many database links in use

The problem is that Oracle keeps each database link a session uses open until that session ends. The number of links a single session can have open before this error appears is governed by the init parameter 'open_links', which I then checked by querying v$parameter:

SQL> select value
  2  from v$parameter
  3  where name = 'open_links';

VALUE
--------------------------------------------------------------------------------
4

This explained why I started getting the error after querying four database links. What can I do if I need to test more than four links at a time? Log out and back in after every four links? That could get a bit tedious. Fortunately, there are a couple of workarounds. The simplest one is to perform a commit (or rollback) after each link, or after every four links. This works for my simple testing, but a more programmatic approach would be to use this call:

DBMS_SESSION.CLOSE_DATABASE_LINK('<db_link_name>');

You can even wrap this call in a loop over the active links in v$dblink to close all existing open links, in a manner similar to this:

create or replace procedure close_db_links
  authid current_user is
begin
  for lnk in (select db_link from v$dblink) 
  loop
    dbms_session.close_database_link(lnk.db_link);
  end loop;
end;
/
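A hedged usage sketch (assuming the session has no open distributed transaction; if it does, close_database_link will complain until you commit or roll back):

```
SQL> commit;                        -- end any in-flight distributed transaction first
SQL> exec close_db_links
SQL> select db_link from v$dblink;  -- should now return no rows
```

If the limit itself is the problem, another option is raising open_links (ALTER SYSTEM SET open_links=10 SCOPE=SPFILE, followed by an instance restart, since the parameter is not dynamic).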

SQL Server recovery steps with progress check

------------------------------ Useful TSQL recovery commands ------------------------------

--Listing the files and paths in a SQL Server backup file. Useful for recovering with the MOVE command

RESTORE FILELISTONLY
FROM DISK = N'\\external_storage\backup_1_20150209060001.BAK'
GO

--Listing information from a SQL Server backup file. Designate the first file of the backup set only.

RESTORE HEADERONLY
FROM DISK = N'\\external_storage\backup_1_20150209060001.BAK'
GO

--Recovering a database while moving recovered files to different locations

declare @backupfilename1 nvarchar(2000)
declare @backupfilename2 nvarchar(2000)
declare @backupfilename3 nvarchar(2000)
declare @backupfilename4 nvarchar(2000)
declare @backupfilename5 nvarchar(2000)
declare @backupfilename6 nvarchar(2000)
set @backupfilename1 = N'\\external_storage\backup1.BAK'
set @backupfilename2 = N'\\external_storage\backup2.BAK'
set @backupfilename3 = N'\\external_storage\backup3.BAK'
set @backupfilename4 = N'\\external_storage\backup4.BAK'
set @backupfilename5 = N'\\external_storage\backup5.BAK'
set @backupfilename6 = N'\\external_storage\backup6.BAK'
RESTORE DATABASE [PGAS_TEST] FROM
DISK = @backupfilename1,
DISK = @backupfilename2,
DISK = @backupfilename3,
DISK = @backupfilename4,
DISK = @backupfilename5,
DISK = @backupfilename6
WITH REPLACE,
MOVE 'backup1' TO 'E:\MicrosoftSQLServer\MSSQL10.DPGS00\MSSQL\DATA\data1.MDF',
MOVE 'backup2' TO 'E:\MicrosoftSQLServer\MSSQL10.DPGS00\MSSQL\DATA\data2.MDF',
MOVE 'backup3' TO 'E:\MicrosoftSQLServer\MSSQL10.DPGS00\MSSQL\DATA\data3.MDF',
MOVE 'backup4' TO 'E:\MicrosoftSQLServer\MSSQL10.DPGS00\MSSQL\DATA\data4.MDF';
GO

———————————— Typical recovery scenario ——————————————-

--If the server itself is restored, the SQL Server software directories will be present, but the software will need to be installed again.

--Look for the error.log file under the SQL Server home (D: drive). This log contains the instance name, paths for data and log files, and the build number. Use the build number and go to this URL (http://sqlserverbuilds.blogspot.com/) to determine what version and what SP needs to be applied.

--Remove the existing SQL Server directories. Take note of directories for options like OLAP and Reporting Services. These are options that will need to be selected during installation.

--Once the software is installed, restart the new instance in single-user mode (go to SQL Server Configuration Manager and add ';-m' to the end of the startup parameters for the SQL Server service under its properties)

--Restart the SQL Server service

--Recover the master and msdb databases from backup
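Restoring master is a special case: it must be done while the instance is in single-user mode, and SQL Server shuts itself down as soon as the restore of master finishes. A hedged sketch (the backup path and file name are illustrative, not from the original post):

```
--Sketch only: path and file name are illustrative
RESTORE DATABASE master
FROM DISK = N'\\external_storage\master_full.BAK'
WITH REPLACE;
--The instance stops automatically after master is restored;
--restart it (still with -m) before restoring msdb.
```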

--Restart the SQL Server service after removing the ';-m' from its startup properties in the SQL Server Configuration Manager

--Recover the remaining databases from backup

–Progress check query

SELECT
session_id,
start_time,
status,
command,
percent_complete,
estimated_completion_time /60/1000 as estimated_completion_minutes,
DATEADD(n,(estimated_completion_time /60/1000),GETDATE()) as estimated_completion_time
FROM sys.dm_exec_requests
WHERE command = 'BACKUP DATABASE' OR command = 'RESTORE DATABASE'
GO

Datapump usage and tips

I published these tips some time ago as illustrated by the 10.1.0.5 version of the help screens below. However, I have been adding tips over the years and I think it has become a pretty comprehensive set of tools. Each section is separated with a line of dashes to make finding certain tips easier since they are in no particular order.  Enjoy.

—————————————————————————————————————————————————————————————–

Export Basics:

create directory jon as '/u01/app/oracle';

expdp '"/ as sysdba"' directory=jon tables=OFS.MESSAGE_TEXT.month_oct_2008 dumpfile=month_oct_2008.dmp logfile=oct.log
(This exports the month_oct_2008 partition of the ofs.message_text table)

NOTE - To help ensure a read-consistent export, use the FLASHBACK_TIME parameter in the following
manner, which uses the current time of the export (using the above example):

expdp '"/ as sysdba"' directory=jon FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')" tables=OFS.MESSAGE_TEXT.month_oct_2008 dumpfile=month_oct_2008.dmp logfile=oct.log

Help contents:

Export: Release 10.1.0.5.0 – Production on Wednesday, 17 December, 2008 8:43

Copyright (c) 2003, Oracle. All rights reserved.

The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the ‘expdp’ command followed
by various parameters. To specify parameters, you use keywords:

Format: expdp KEYWORD=value or KEYWORD=(value1,value2,…,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword Description (Default)
——————————————————————————
ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to unload where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dumpfiles and logfiles.
DUMPFILE List of destination dump files (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Export entire database (N).
HELP Display Help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job to create.
LOGFILE Log file name (export.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to export a subset of a table.
SCHEMAS List of schemas to export (login schema).
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
TABLES Identifies a list of tables to export – one schema only.
TABLESPACES Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command Description
——————————————————————————
ADD_FILE Add dumpfile to dumpfile set.
ADD_FILE=dumpfile-name
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.

Attaching to an export job:
expdp attach=<job_name>  --You will be prompted for the username and password of the account the job was started under.
--If the user is SYS, enter 'sys as sysdba' for the username

—————————————————————————————————————————————————————————————–

Import Basics:

create directory jon as '/u01/app/oracle';

impdp '"/ as sysdba"' directory=jon dumpfile=month_oct_2008.dmp tables=ofs.message_text:month_oct_2008 content=data_only
(This imports the data only, if the table already exists)

Help content:

Import: Release 10.1.0.5.0 – Production on Wednesday, 17 December, 2008 8:45

Copyright (c) 2003, Oracle. All rights reserved.

The Data Pump Import utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Import runs by entering the ‘impdp’ command followed
by various parameters. To specify parameters, you use keywords:

Format: impdp KEYWORD=value or KEYWORD=(value1,value2,…,valueN)
Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

Keyword Description (Default)
——————————————————————————
ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to load where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dump, log, and sql files.
DUMPFILE List of dumpfiles to import from (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Import everything from source (Y).
HELP Display help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of import job to create.
LOGFILE Log file name (import.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile.
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to import a subset of a table.
REMAP_DATAFILE Redefine datafile references in all DDL statements.
REMAP_SCHEMA Objects from one schema are loaded into another schema.
REMAP_TABLESPACE Tablespace object are remapped to another tablespace.
REUSE_DATAFILES Tablespace will be initialized if it already exists (N).
SCHEMAS List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SQLFILE Write all the SQL DDL to a specified file.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION Action to take if imported object already exists.
Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES Identifies a list of tables to import.
TABLESPACES Identifies a list of tablespaces to import.
TRANSFORM Metadata transform to apply (Y/N) to specific objects.
Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.
ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
Only valid in NETWORK_LINK mode import operations.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.
Only valid for NETWORK_LINK and SQLFILE.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command Description (Default)
——————————————————————————
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
START_JOB=SKIP_CURRENT will start the job after skipping
any action which was in progress when job was stopped.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.

—————————————————————————————————————————————————————————————–

Attaching to an import job:
impdp attach=<job_name>  --You will be prompted for the username and password of the account the job was started under

NOTE - Use the command EXIT_CLIENT to exit the session but keep the job running

—————————————————————————————————————————————————————————————–

You can use this query to look at progress of import (or export if you change search string):

select sid, serial#, context,
round(sofar/decode(totalwork,0,1,totalwork)*100,2) "% Complete",
substr(to_char(sysdate,'yymmdd hh24:mi:ss'),1,15) "Time Now",
time_remaining
from v$session_longops
where opname like '%IMPORT%'
and round(sofar/decode(totalwork,0,1,totalwork)*100,2) < 100;

—————————————————————————————————————————————————————————————–

–Getting a read-consistent dump as of the current sysdate:

--Timestamp
Add the argument FLASHBACK_TIME="to_timestamp(sysdate)" to the Data Pump export command.

–SCN
SET numwidth 20
SELECT dbms_flashback.get_system_change_number FROM DUAL;

Add the result to the FLASHBACK_SCN argument of the datapump export command.
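As a sketch, the captured SCN can be spliced into the export command from the shell (the directory, dump file name, and the hard-coded SCN below are illustrative, not from the original post):

```shell
# Hedged sketch: splice a captured SCN into an expdp command line.
# In a real run the SCN comes from the query above; here it is
# hard-coded purely for illustration, as are directory and file names.
scn=1234567890
cmd="expdp '\"/ as sysdba\"' directory=jon full=y dumpfile=full_scn_${scn}.dmp FLASHBACK_SCN=${scn}"
echo "$cmd"
```

Echoing the command before running it is a cheap way to sanity-check the quoting.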

—————————————————————————————————————————————————————————————–

–Estimating size of dump file:

Schema level export:
expdp "'/ as sysdba'" schemas=EAI,EDB,ED_PROD,ED_PROD_APPL,ENKITEC,ES,INT_SERVICE,INVSTSEARCH_PROD,ITOPSPORTAL,JAILDB estimate_only=y nologfile=y

Full export:
expdp "'/ as sysdba'" full=y estimate_only=y nologfile=y

—————————————————————————————————————————————————————————————–

–Data Pump usage with query option

In this example, there was a limited amount of space available on the source server for the export.
Since the 10gR1 documentation on Data Pump export states that you cannot perform a partition export over a database link (Oracle Database Utilities Release 1 (10.1) Part no. B10825-01), I was forced to export the data a little at a time using the Data Pump query feature. The partition key column was a date with unique formatting (i.e. 12/11/08 7:29:34 PM). The following syntax was used to export the November 2008 partition, but only the first 10 days' worth.

The \ was required for end-of-line command continuation and before special characters, as this was executed on the command line. Special characters in this case were ( and ) and ' but not :

NOTE - The \ is ONLY REQUIRED WHEN EXECUTING THE QUERY OPTION FROM THE COMMAND LINE.
When using the query option in a parfile, it is not required.

expdp '"/ as sysdba"' tables=OFS.MESSAGE_TEXT.month_nov_2008 dumpfile=nov_1_10.dmp \
logfile=nov_1_10.log directory=jon \
query=ofs.message_text:\"where TO_DATE\(create_date,\'SYYYY-MM-DD HH24:MI:SS\',\'NLS_CALENDAR=GREGORIAN\'\) \< TO_DATE\(\'2008-12-31 23:59:59\',\'YYYY-MM-DD HH24:MI:SS\'\)\"
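For comparison, here is a hedged sketch of the same kind of export driven by a parfile, where the QUERY clause needs no backslash escaping (the file name and the simplified predicate are illustrative, not the originals):

```
# nov_1_10.par -- illustrative parfile
directory=jon
dumpfile=nov_1_10.dmp
logfile=nov_1_10.log
tables=OFS.MESSAGE_TEXT.month_nov_2008
query=ofs.message_text:"where create_date < TO_DATE('2008-11-11 00:00:00','YYYY-MM-DD HH24:MI:SS')"
```

It would then be invoked as expdp '"/ as sysdba"' parfile=nov_1_10.par with no escaping at all.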

—————————————————————————————————————————————————————————————–

--Finding and killing datapump jobs

--If you are in a datapump session, you can get to the datapump prompt (Ctrl-C) and simply type KILL_JOB to terminate the current job.
--Otherwise, use this method:

–Find the jobs

SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a11
COL operation LIKE state
COL job_mode LIKE state

SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;

–Kill the job you are interested in

SET serveroutput on
SET lines 100
DECLARE
h1 NUMBER;
BEGIN
-- Format: DBMS_DATAPUMP.ATTACH('<job_name>','<owner>');
h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_01','SYSTEM');
DBMS_DATAPUMP.STOP_JOB (h1,1,0);
END;
/

–Make sure the job has been killed:

SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;

--If that kill job routine does not work, then try this:

You need to delete the master table. First find it:

SQL> select owner,object_name,status,object_id from dba_objects where object_name = 'SYS_EXPORT_SCHEMA_01';

OWNER OBJECT_NAME STATUS OBJECT_ID
——————– —————————— ——- ———-
DATAPUMP SYS_EXPORT_TABLE_01 VALID 88842

Then drop it:

drop table datapump.SYS_EXPORT_TABLE_01; --Make sure you use the owner identifier from the previous query output.

And check whether it helps:

SQL> select owner_name, job_name, state from dba_datapump_jobs;

—————————————————————————————————————————————————————————————–

--Tracing a Data Pump session (MOS 286496.1)

In the event that there appears to be some issue with your Data Pump utility, you can use the following commands to trace Data Pump import and export sessions.

Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of
Export Data Pump (expdp) or Import Data Pump (impdp). The first three digits enable tracing for a
specific Data Pump component, while the last four digits are usually 0300.
Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive.

–Example:
TRACE = 04A0300
— or:
TRACE=4a0300

--Some rules to remember when specifying a value for the TRACE parameter:
- do not specify more than 7 hexadecimal digits;
- do not specify the typical leading 0x hexadecimal specification characters;
- do not convert the hexadecimal value to a decimal value;
- leading zeros may be omitted (they are not required);
- values are not case sensitive.
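Since the first three hex digits form a bitmask, component values from the MOS note can be OR-ed together to trace several components at once. A small shell sketch (the component values 080300 for the Master Control Process and 400300 for the Worker processes are taken from note 286496.1; treat them as assumptions and verify against your version):

```shell
# Hedged sketch: combine two Data Pump trace components with a bitwise OR.
# 0x080300 = Master Control Process, 0x400300 = Worker processes
# (component values per MOS 286496.1 -- assumptions, verify for your version).
mask=$(printf '%X' $(( 0x080300 | 0x400300 )))
echo "TRACE=$mask"
```

OR-ing every component mask together is what produces the full-trace value 1FF0300 used in the examples below.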

–Export:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300

–Import:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300

--The trace files will be written to the background_dump_dest (10g) or diagnostic_dest (11g) location with the
--following name format:

-- Master process trace file:
<SID>_dm<nn>_<process_id>.trc

-- Worker process trace file:
<SID>_dw<nn>_<process_id>.trc

—————————————————————————————————————————————————————————————–

--Viewing the contents of a datapump dumpfile without importing the contents

impdp username/pwd DIRECTORY=<directory_name> DUMPFILE=<dumpfile_name> SQLFILE=<output_file>.sql

–DATA PUMP: EXCLUDE / INCLUDE TABLE IMPORT Examples

—————————————————————————————————————————————————————————————–

EXCLUDING TABLES DURING DATA_PUMP IMPORT:

impdp USERNAME/PASSWORD schemas=USERNAME directory=DIRECTORY_NAME dumpfile=FILE_NAME.dmp EXCLUDE=TABLE:\"IN \(\'TABLE1\',\'TABLE2\',\'TABLE3\',\'TABLE4\',\'TABLE5\',\'TABLE6\'\)\"

impdp ultimus/ultimus schemas=ultimus directory=db_back dumpfile=AFT_EOD.dmp EXCLUDE=TABLE:\"IN \(\'IMG_SIGNATORY_IMAGE\',\'IMG_SIGNATORY_IMAGE_DEL_LOG\',\'IMG_SIGNATORY_INFO\',\'IMG_SIGNATORY_VERIFY_LOG\',\'IMG_BACH_CLG_HIST_IN\',\'IMG_BRM_INST\',\'IMG_NFT_AUTH_LOG\'\)\"

—————————————————————————————————————————————————————————————–

EXCLUDING TABLES DURING DATA_PUMP IMPORT - USING "LIKE":

impdp USERNAME/PASSWORD schemas=USERNAME directory=DIRECTORY_NAME dumpfile=FILE_NAME.dmp EXCLUDE=TABLE:\"like \'IMG_%\'\" EXCLUDE=TABLE:\"IN \(\'EMP\',\'DEPT\'\)\"

—————————————————————————————————————————————————————————————–

INCLUDING TABLES DURING DATA_PUMP IMPORT:

impdp USERNAME/PASSWORD schemas=USERNAME directory=DIRECTORY_NAME dumpfile=FILE_NAME.dmp INCLUDE=TABLE:\"IN \(\'EMP\', \'DEP\'\)\"

The OLD subquery TRICK:

EXCLUDE:
expdp directory=DB_BACK dumpfile=zakir.dmp EXCLUDE=table:\"in \(select table_name from all_tables where table_name like \'IMG_%\' or table_name like \'%HIST%\'\)\"

INCLUDE:
expdp directory=DB_BACK dumpfile=zakir.dmp INCLUDE=table:\"in \(select table_name from all_tables where table_name like \'IMG_%\' or table_name like \'%HIST%\'\)\"

—————————————————————————————————————————————————————————————–

–Killing or stopping a running datapump job

The difference between KILL and STOP is simple to explain. When you kill a job, you will not be able to
resume or start it again, and its logs and dumpfiles will be removed!

When exporting (or importing), press Ctrl-C to show the datapump prompt and type KILL_JOB or STOP_JOB[=IMMEDIATE].
You will be prompted to confirm.

Adding '=IMMEDIATE' to STOP_JOB means the currently running 'sub-job' is not finished and will have to be redone when the job is restarted.

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
[Ctrl-c]
Export> KILL_JOB
..or..
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([yes]/no): yes

—————————————————————————————————————————————————————————————–

Resuming a stopped job

Identify your job with SQL, or you may already know it because you used 'JOB_NAME=<name>' 😉

SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;

OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE
———- ——————– ———- ———- ————
SYSTEM EXP_FULL EXPORT FULL NOT RUNNING

Now we can ATTACH to the job using it as a parameter to the expdp or impdp command, and a lot of gibberish is shown:

> expdp system ATTACH=EXP_FULL

Job: EXP_FULL
Owner: SYSTEM
Operation: EXPORT
Creator Privs: TRUE
GUID: A5441357B472DFEEE040007F0100692A
Start Time: Thursday, 08 June, 2011 20:23:39
Mode: FULL
Instance: db1
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND system/******** full=y JOB_NAME=EXP_FULL
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /u01/app/oracle/admin/db1/dpdump/expdat.dmp
bytes written: 520,192

Worker 1 Status:
Process Name: DW00
State: UNDEFINED

(Re)start the job with START_JOB; use '=SKIP_CURRENT' if you want to skip the action that was in progress when the job stopped.
To show progress again, type CONTINUE_CLIENT (the job will be restarted if idle).

Export> START_JOB[=SKIP_CURRENT]
Export> CONTINUE_CLIENT
Job EXP_FULL has been reopened at Thursday, 09 June, 2011 10:26
Restarting “SYSTEM”.”EXP_FULL”: system/******** full=y JOB_NAME=EXP_FULL

Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE