Just yesterday I encountered a wrong conclusion based on a misunderstanding of the dbms_output.put_line procedure. The writer of a SQL script that spooled its output to a log file used the procedure to generate ‘start’ and ‘end’ messages within a PL/SQL block, and he was confused when the ‘start’ message did not appear in the log file while the main section of the block was executing. He assumed that calls to dbms_output.put_line produce output immediately.
This is a common misconception about dbms_output.put_line. The following example, modeled on the functionality the writer was attempting, reproduces the behavior.
set serveroutput on
spool ./output.log
begin
  dbms_output.put_line('Start - '||to_char(sysdate,'MM/DD/YYYY HH24:MI:SS'));
  execute immediate 'select * from tab';
  dbms_lock.sleep(30);
  dbms_output.put_line('End - '||to_char(sysdate,'MM/DD/YYYY HH24:MI:SS'));
end;
/
spool off
In this example, spooling is started before the PL/SQL block. The block takes about 30 seconds to run because of the dbms_lock.sleep call. During execution, the output.log file will remain empty until the code finishes, either successfully or unsuccessfully, and ‘spool off’ is executed. You can verify this with a ‘tail -f’ command in Linux/Unix or the ‘type’ command in Windows.
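The reason for this behavior is that dbms_output.put_line only writes into a session-level memory buffer; the client (here SQL*Plus) fetches and displays the buffered lines after the PL/SQL call returns. The following is a minimal sketch of that mechanism, draining the buffer manually with dbms_output.get_line; it is an illustration only and is not part of the original script:

```sql
set serveroutput off  -- keep SQL*Plus from draining the buffer itself

begin
  dbms_output.enable;
  dbms_output.put_line('buffered message');  -- goes into the session buffer only
end;
/

-- Nothing has been displayed yet. A later call can read the buffered line:
declare
  l_line   varchar2(32767);
  l_status integer;
begin
  dbms_output.get_line(l_line, l_status);  -- l_status = 0 when a line was read
  -- l_line now holds 'buffered message'
end;
/
```

With ‘set serveroutput on’, SQL*Plus performs this retrieval for you, but only after each PL/SQL call completes, which is why the ‘start’ message cannot appear mid-execution.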
What happens if you place the ‘start’ and ‘end’ messages outside the PL/SQL block?
set serveroutput on
spool ./output.log
select 'Start - '||to_char(sysdate,'MM/DD/YYYY HH24:MI:SS') from dual;
begin
  execute immediate 'select * from tab';
  dbms_lock.sleep(30);
end;
/
select 'End - '||to_char(sysdate,'MM/DD/YYYY HH24:MI:SS') from dual;
spool off
Even though the ‘start’ message appears immediately in your SQL*Plus session, the log file will still not be populated until the PL/SQL block finishes and the ‘spool off’ command runs.
If you truly want to see the ‘start’ message before ANY code executes, then using OS commands to start spooling a log file and generate the ‘start’ message is probably your only alternative. Here is an example for a Linux-based system:
echo "Start - $(date)" > ./output.log
This command, executed before the script is called, creates the log file in the current directory and writes a ‘start’ message with the current date and time. The message shows up in the log file immediately. Next, call the script to be executed:
$ORACLE_HOME/bin/sqlplus -S username/password @script.sql >> ./output.log
This command starts a SQL*Plus session in silent mode (-S), which suppresses the banner and prompts, runs the script.sql script, and sends the output to the log file created earlier. With this method, you don’t need any ‘spool’ commands in the script. Notice that I used “>>” to redirect the output in this command, not the “>” I used when creating the log file. The “>>” operator appends to an existing file, creating it if it does not exist. If you use “>”, an existing file will be overwritten and you will lose any information previously written to it.
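The difference between the two redirect operators is easy to demonstrate on its own (the file name demo.log is just an illustration):

```shell
echo "first line" > demo.log    # '>' creates the file, truncating it if it exists
echo "second line" >> demo.log  # '>>' appends, preserving the existing contents
cat demo.log                    # shows both lines
echo "third line" > demo.log    # '>' again: the previous two lines are gone
cat demo.log                    # shows only "third line"
rm demo.log                     # clean up
```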
For a Windows-based system, the command is almost the same:
echo Start - %date% %time% > ./output.log
This command, executed before the script is called, creates the log file in the current directory and writes a ‘start’ message with the date and time. The message shows up in the log file immediately. The “>>” and “>” redirect operators work the same as in a Linux/Unix environment.
From here there are a few different ways to execute a SQL script from a Windows command prompt. If this is going to be an ongoing process in Windows, I recommend writing the scripts in Perl; it will save time, especially if you have to port the code between Windows and Linux/Unix environments.
One last point: you should not use this ‘start’ and ‘end’ message method in a production system. While it is fine for time tracking while testing the script, it should not be relied on outside of a test environment. Even when these messages are sent to a log file, that file still needs to be parsed for error messages at the conclusion of the process. A better solution is to send the messages to a table used for auditing and monitoring. Just don’t forget to commit each time you send a message to the table, and remember to purge the information from the table based on your audit requirements.
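One common way to handle the commit requirement is to wrap the insert in an autonomous transaction, so the logging commit does not interfere with the main transaction. Here is a sketch of that approach; the table name log_messages and procedure name log_msg are placeholders of my own, not from the original script:

```sql
create table log_messages (
  logged_at  date default sysdate,
  message    varchar2(4000)
);

create or replace procedure log_msg (p_message in varchar2) as
  pragma autonomous_transaction;  -- commit here without affecting the caller's transaction
begin
  insert into log_messages (message) values (p_message);
  commit;
end;
/
```

A PL/SQL block can then call log_msg('Start - ...') and log_msg('End - ...') and each message is committed, and therefore visible to a monitoring session, as soon as the call completes.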