Optimizer hints: silver bullet or band-aid?

If you are familiar with Oracle optimizer hints, you have heard that they can be a way to force an immediate performance improvement, but that they are not meant to be a long-term solution.  The reason they are not a long-term solution, in my opinion, is that a database is a constantly changing entity.  These changes include data volume, data skew (from inserts, updates, and deletes), and the addition or removal of objects (tables, views, packages, and procedures).  This is the same reason that performance tuning is not a ‘once and done’ process.

I recently had the opportunity to experience the use of optimizer hints as a (supposed) silver bullet and, months later, as a performance detriment.  I was tuning a query in third-party supplied code and found through the explain plan that it was selecting vastly more records than originally thought: it was using index range scans to return 38 million records.  A volume of data that large is better retrieved through the multi-block reads of a full table scan than through single-block index range scans.  The system did not have the Tuning and Diagnostics Packs licensed, so SQL profiles and baselines were not an option, and having the vendor change the code was not an option either.  I finally used several full table scan hints in one of the poorly performing sub-queries, and the result was an improved execution time for the process.  While thanks were going around, I cautioned that the use of these hints was not a long-term solution and that a code rewrite was likely the best solution overall.  I had the feeling that my caution fell on deaf ears.  This was confirmed over the next couple of months, when I was asked to apply the ‘fix’ to other environments where the performance problem appeared.
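
For illustration, the kind of change I made looks something like this (the table and alias here are invented; the real statement belongs to the vendor):

-- Hypothetical sketch of the ‘fix’ described above.  The FULL hint
-- tells the optimizer to read the table with multi-block I/O
-- instead of walking an index one block at a time.
SELECT /*+ FULL(o) */ o.order_id, o.status
FROM   orders o
WHERE  o.created_date >= DATE '2015-01-01';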

Last week, I was asked to review one of the environments where performance was poor even though the ‘fix’ was already present.  How could this be?  Wasn’t the ‘fix’ the silver bullet for this problem?  I resisted the urge to say ‘I told you so’ and started my tuning exercise.  As a side note, when approaching poorly performing code with embedded optimizer hints, you should always remove the hints so that you start your tuning from an unbiased perspective.  After removing the ‘fix’, I asked the users to check the performance.  Guess what?  Performance returned to an acceptable level.
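
As an aside, if editing the statement to strip the hints isn’t practical, newer Oracle releases (18c onward, as I recall) expose an OPTIMIZER_IGNORE_HINTS parameter that tells the optimizer to disregard embedded hints, which makes this kind of unbiased retest easy:

-- Ignore embedded hints for the current session only, so the
-- retest starts from an unbiased optimizer perspective.
ALTER SESSION SET optimizer_ignore_hints = TRUE;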

So, in the course of a couple of months, I had seen that these optimizer hints were not a silver bullet after all, but a band-aid: something to temporarily remedy a performance problem until a long-term solution can be found.  Another piece of advice I might offer is that if you do end up using optimizer hints, keep track of where you place them, as you may need to go back and remove them later.  Happy tuning.
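
One low-tech way to keep track of them is to sweep the shared pool (or your source control) for hint comments from time to time; something along these lines works, assuming the hints were written with the standard /*+ ... */ syntax:

-- Rough sweep of the shared pool for statements carrying hints.
-- Crude, but useful for finding ‘fixes’ that were left behind.
SELECT sql_id, SUBSTR(sql_text, 1, 80) AS sql_start
FROM   v$sql
WHERE  sql_text LIKE '%/*+%';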


Plan for the best or plan for the worst?

I recently had to modify existing code so that results could be published from a Data Guard primary database to an active standby database in real time.  The change involved adding a small piece of code to perform a checkpoint on the primary when the main code completed, which would immediately post the changes to the active standby.
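
The addition itself was small; stripped of its surroundings, it amounted to something like this (assuming the executing schema has the ALTER SYSTEM privilege granted directly):

-- Run once the publishing routine finishes on the primary.
-- ALTER SYSTEM cannot be called directly from PL/SQL, hence the
-- dynamic SQL.
BEGIN
  EXECUTE IMMEDIATE 'ALTER SYSTEM CHECKPOINT';
END;
/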

The third-party vendor of the code preferred to have the checkpoint executed by a trigger that fired when their code wrote to a log table.  No problem, right?  My only concern was that the log table might store ‘failure’ messages as well as ‘success’ messages, though some additional code in the trigger could check for and fire only on the ‘success’ messages.  I thought this was the plan until I heard that the log table only stores ‘success’ messages.  Finally, I was asked to put a delay of two minutes in the trigger before the trigger code was executed.  I started getting suspicious, because something is wrong when you have to forcibly delay the execution of code.  When I asked why, I was told that the log table record was inserted at the start of the process, not the end, and the delay was needed to ensure the process completed before the trigger executed the checkpoint.  It then became clear why they wanted a solution that involved me making code modifications rather than them.

To recap: the code inserts a ‘successful completion’ message as the first step of its execution; the trigger then fires, waits two minutes, and performs a checkpoint to publish the results to the standby.  This happens regardless of how the process actually turns out.
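
For the curious, what they asked for would look roughly like the sketch below (the VENDOR_LOG table and trigger name are my inventions).  I have used DBMS_JOB to defer the checkpoint rather than sleeping inside the trigger, since a sleep would stall the vendor’s own session for two minutes as well:

CREATE OR REPLACE TRIGGER vendor_log_publish_trg
AFTER INSERT ON vendor_log   -- hypothetical vendor log table
FOR EACH ROW
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- Schedule the checkpoint to run about two minutes from now.
  -- DBMS_JOB is transactional, so the job is only queued if the
  -- vendor's insert actually commits.
  DBMS_JOB.SUBMIT(
    job       => l_job,
    what      => 'BEGIN EXECUTE IMMEDIATE ''ALTER SYSTEM CHECKPOINT''; END;',
    next_date => SYSDATE + 2 / (24 * 60));
END;
/

Dressed up or not, though, it still publishes results that may not exist yet.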

The thinking that went into writing this code is what I refer to as ‘planning for the best’.  This is the mindset that if the code is written right, it can’t possibly fail.  Code writers who have been in the business for more than a few years know you cannot plan for everything, so they tend to write code with a ‘plan for the worst’ mindset, building exceptions and error handlers into the code.  Code written that way is more robust, more adaptable, and less prone to failure, and it doesn’t need tweaks such as artificial delays to make it work.  I almost got whiplash from all the head shaking I did on this one.  Just saying.
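
In code terms, ‘planning for the worst’ can be as simple as logging the outcome at the end of the work inside proper handlers, instead of declaring victory up front.  A minimal sketch, with names invented for illustration:

-- Hypothetical rewrite of the vendor's flow: the 'success' row is
-- written only after the work actually finishes, and failures get
-- recorded instead of being assumed away.
BEGIN
  publish_results;   -- placeholder for the vendor's real work
  INSERT INTO vendor_log (msg_time, msg_text)
  VALUES (SYSDATE, 'success');
EXCEPTION
  WHEN OTHERS THEN
    INSERT INTO vendor_log (msg_time, msg_text)
    VALUES (SYSDATE, 'failure: ' || SQLERRM);
    RAISE;           -- do not swallow the error
END;
/

Written this way, the log row would actually mean the process finished, and the trigger would need no delay at all.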