Plan for the best or plan for the worst?

Colima Volcano shows a powerful night explosion with lightning, ballistic projectiles, and incandescent rockfalls; image taken in the Comala municipality in Colima, Mexico, December 13, 2015.


I recently had to modify existing code to enable results to be published from a Data Guard primary database to an active standby database in real time. The change involved adding a piece of code to perform a checkpoint on the primary when the process completed, which would immediately post the changes to the active standby.

The third-party vendor of the code preferred to have the checkpoint executed by a trigger when their code wrote to a log table. No problem, right? My only concern was that the log table might store ‘failure’ messages as well as ‘success’ messages; adding a check in the trigger so that it fired only on ‘success’ messages would handle that, though. I thought this was the plan until I heard that the log table only stores ‘success’ messages. Finally, I was asked to put a two-minute delay in the trigger before the trigger code executed. I started getting suspicious, because something is wrong when you have to forcibly delay the execution of code. When I asked why, I was told that the log table record was inserted at the start of the process, not the end, and the delay was needed to ensure the process completed before the trigger executed the checkpoint. It then became clear why they wanted a solution that involved me making code modifications and not them.

To recap, the code inserts a ‘successful completion’ message as the first step of the execution, then the trigger fires, waits for two minutes, and does a checkpoint to publish the results to the standby. This is done regardless of the code outcome.
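In Oracle terms, the arrangement described above would look something like the following sketch. The table and trigger names are hypothetical (the vendor's actual objects were not named in the post), and the two-minute delay uses `DBMS_SESSION.SLEEP` (on older releases, `DBMS_LOCK.SLEEP`); the autonomous transaction is needed so the `ALTER SYSTEM` call can run from trigger code:

```sql
-- Hypothetical reconstruction of the requested trigger.
-- app_log is a stand-in name for the vendor's log table.
CREATE OR REPLACE TRIGGER trg_publish_to_standby
AFTER INSERT ON app_log
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;  -- allows ALTER SYSTEM from a trigger
BEGIN
  DBMS_SESSION.SLEEP(120);        -- the requested two-minute delay
  -- Push the changes so the active standby sees them immediately:
  EXECUTE IMMEDIATE 'ALTER SYSTEM CHECKPOINT';
END;
/
```

Note that the sleep holds up the trigger, not the inserting session's outcome: the ‘success’ row is already committed as part of the process's first step, so the checkpoint fires two minutes later whether or not the process actually finished.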

The thinking that went into writing this code is what I refer to as ‘planning for the best’. This is the mindset that if the code is written right, it can’t possibly fail. Code writers who have been in the business for more than a few years know you cannot plan for everything, so they tend to write code with a ‘plan for the worst’ mindset. This involves writing exceptions and error handlers into the code. As such, the code is more robust, adaptable, and less prone to failure, and you don't have to add tweaks such as delays to make it work. I almost got whiplash from all the head shaking I did on this one. Just saying.
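For contrast, a ‘plan for the worst’ version of the same process might be structured like this sketch. The procedure and table names are again hypothetical; the point is that the log row and the checkpoint come *after* the work completes, and failures are logged instead of silently publishing:

```sql
-- Hypothetical restructuring: log and checkpoint only on real completion.
CREATE OR REPLACE PROCEDURE publish_results AS
BEGIN
  run_vendor_process;  -- stand-in for the vendor's actual work

  -- Only reached if the process truly finished:
  INSERT INTO app_log (msg, logged_at)
  VALUES ('success', SYSTIMESTAMP);
  COMMIT;

  -- Post the committed changes to the active standby, no delay needed:
  EXECUTE IMMEDIATE 'ALTER SYSTEM CHECKPOINT';
EXCEPTION
  WHEN OTHERS THEN
    -- Record the failure rather than publishing anyway
    INSERT INTO app_log (msg, logged_at)
    VALUES ('failure: ' || SQLERRM, SYSTIMESTAMP);
    COMMIT;
    RAISE;
END;
/
```

With this shape there is nothing to delay: the checkpoint cannot run before the work is done, because it is sequenced after it, and a failed run leaves a ‘failure’ row instead of a false ‘success’.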

2 thoughts on “Plan for the best or plan for the worst?”

  1. Totally agree with you. I ran into a similar situation when tuning a database for a client. The client was using third-party software that was running slower and slower, to the point that the client could not tolerate it anymore. I checked it out and noticed the bottleneck was a message table which, according to the design of the software, was supposed to hold only a few thousand rows at most. But old messages had never been purged from this table, and it contained 10+ million rows (basically one and a half years of history since the software went into production). After I suggested that the client contact the software vendor to purge the table, the vendor agreed and said that 99+% of the rows in the table were marked “logically deleted” and could be deleted. I had never heard of the “logically deleted” concept before; an Oracle join will still pick up every row that physically exists in the table. Anyway, after the purge the software ran lightning fast.
