Python and cx_Oracle for Oracle database connections

In my latest adventures, I have been improving my Python skills with regard to running Oracle SQL and PL/SQL code.  Once I had the process down for basic connection and execution, a database migration that included new LDAP connectivity threw me for a loop until I figured it out.

First, a little background on cx_Oracle.  cx_Oracle is a Python extension module that enables access to Oracle databases. It conforms to the Python database API 2.0 specification with a considerable number of additions and a couple of exclusions.  cx_Oracle 7 has been tested with Python version 2.7 and with versions 3.5 through 3.8. You can use cx_Oracle with Oracle 11.2, 12, 18, and 19 client libraries. Oracle's standard client-server interoperability allows connection to both older and newer databases; for example, Oracle 19c client libraries can connect to Oracle Database 11.2.

To check whether the cx_Oracle module is installed in your current Python environment, you can simply create a Python script containing the line import cx_Oracle and execute it with python <filename>, or run the same import from the Python interactive prompt.  If there are no errors, the module is installed.
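For example, a small check along those lines that reports availability without raising an error (a sketch; importlib.util.find_spec looks the module up without actually importing it):

```python
import importlib.util

def module_available(name):
    """Return True if the named module can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

# Check for the Oracle extension module without triggering an ImportError
if module_available("cx_Oracle"):
    print("cx_Oracle is installed")
else:
    print("cx_Oracle is missing - install it with: python -m pip install cx_Oracle")
```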

If not, you can install it on Linux with the pip (pip installs packages) module:

python -m pip install cx_Oracle --upgrade

If you receive an error about the pip module being unavailable, you can install it with the command:

yum install python-pip

Now that we know what cx_oracle is for and how to install it, here is an example of using it to connect to an Oracle RAC system:

import sys
import os
import cx_Oracle

dsnStr = cx_Oracle.makedsn('<SCAN_NAME>','<LISTEN_PORT>',service_name='<SERVICE_NAME>')
con = cx_Oracle.connect(user='<USER>',password='<PWD>',dsn=dsnStr)
<SQL or PL/SQL to execute>
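For context, makedsn simply builds a standard TNS connect descriptor string. Here is a rough pure-Python sketch of what it returns (the host, port, and service name below are hypothetical placeholders):

```python
def make_dsn(host, port, service_name):
    # Mirrors the connect descriptor format that cx_Oracle.makedsn() returns
    return ("(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST={h})(PORT={p}))"
            "(CONNECT_DATA=(SERVICE_NAME={s})))").format(h=host, p=port, s=service_name)

print(make_dsn("myscan.example.com", 1521, "orclsvc"))
```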

Seems pretty simple.  This changed when we tried to configure a connection similar to the one above for a database using LDAP authentication.  The new sqlnet.ora and ldap.ora files were in place, but every connection attempt resulted in an ORA-12154 error.  In my particular case, Oracle Instant Client 12.2 and the (regular) 18c client were installed on the server where these scripts were executed.  Even placing the necessary .ora network files in both locations resulted in the same errors.

The Python command print(os.environ) will give you a list of all the environment variables set in the current environment.  Note – be sure to use the import os command to import the proper library first.  This was an important finding for me since I was using the Jenkins development environment and could not see the actual execution user.  Using this command, I found out the identity of the OS user under which the Python scripts were executing.  After getting access to the development server, I found this account did not have any of the Oracle environment settings associated with it, such as ORACLE_HOME, TNS_ADMIN, or LD_LIBRARY_PATH.  My next step was to set these for the user account on the server.  Even so, attempts to connect to the Oracle database resulted in the same ORA-12154 error.  As a curious aside, the print(os.environ) command did not list these variables after I set them at the OS level for the user account.
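A minimal sketch of that kind of environment inspection (written with print() calls so it runs under both Python 2.7 and 3; the variable names are the Oracle ones discussed above):

```python
import os

def report_oracle_env():
    """Return the Oracle-related variables visible to the current process."""
    wanted = ("ORACLE_HOME", "TNS_ADMIN", "LD_LIBRARY_PATH")
    return {name: os.environ.get(name) for name in wanted}

# Which OS account is this running under, and what can it see?
print(os.environ.get("USER") or os.environ.get("USERNAME"))
print(report_oracle_env())
```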

Now what?

Well, if you could pull up the environment variables using os.environ, couldn’t you set them as well?  Setting the values of ORACLE_HOME, TNS_ADMIN, and LD_LIBRARY_PATH in the Python script turned out to be the answer and ensured I was pointing to the correct Oracle client binaries.  Here is the resulting code:

import sys
import os
import cx_Oracle

os.environ['ORACLE_HOME'] = '/opt/oracle/version'
os.environ['TNS_ADMIN'] = '/opt/oracle/version/network/admin'
os.environ['LD_LIBRARY_PATH'] = '/opt/oracle/version/lib'

con = cx_Oracle.connect('{}/{}@{}'.format('<USER>','<PWD>','<LDAP_DESIGNATOR>'))
<SQL or PL/SQL to execute>

Also, while working to get the Python script to recognize the Oracle environment variable settings at the OS level, I discovered the Python subprocess module for calling scripts at the OS level.  This was a workaround that could take advantage of the OS account environment settings.  However, I was told that even if username and password credentials were masked as they were passed in, they could be displayed by executing ps -ef on the server.  I have not been able to verify this, but it could be a security issue when executing Python scripts against production databases.

import os 
import subprocess 

output = subprocess.check_output(['<PATH_AND_SCRIPT_NAME>', '<ARG1>', '<ARG2>'])

If the called script executes SQL or PL/SQL through SQL*Plus, it is a good idea to use the WHENEVER SQLERROR EXIT 1 directive to ensure that any errors cause a failure and are captured.
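One way to avoid putting credentials on the child's command line at all, assuming the called script can read environment variables, is to hand them over through the subprocess environment. A sketch (the DB_PWD variable name is hypothetical):

```python
import os
import subprocess
import sys

def run_with_secret_env(cmd, secrets):
    """Run cmd with extra variables in its environment rather than on its
    argv, so they do not appear in the process listing (ps -ef)."""
    env = os.environ.copy()
    env.update(secrets)
    return subprocess.check_output(cmd, env=env)

# Example: a child process reads the credential from its environment
out = run_with_secret_env(
    [sys.executable, "-c", "import os; print(os.environ['DB_PWD'])"],
    {"DB_PWD": "s3cret"},
)
print(out)
```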



SQL Server Full Backup Verification

As database administrators, our first focus in our production environments should be backup and recovery.  I have mentioned this before in previous posts, and it can never be said too much.  Sure, you can back up your database left and right, but unless you verify the backups with practice recoveries or some other type of verification, your backup and recovery plan is unreliable.

SQL Server author Steve Rezhener has published a scripted backup verification process that I thought was interesting.  I tested this script in a SQL Server 2019 Developer Edition lab environment and wanted to pass the process on along with my comments.

This verification script performs the following actions:

  1. Gets the latest full backup of the user-designated database
  2. Checks that there is enough free space for the trial recovery of that database
  3. Restores the latest full backup into a copy database
  4. Checks the copy database for corruption
  5. Removes the copy database
  6. Places the check results into a temporary table, publishes them, then drops the table
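Step 2's space check amounts to requiring at least twice the data file size in free space before attempting the restore. Reduced to a function, the rule the script applies looks roughly like this (a sketch mirroring the script's guard):

```python
def enough_space_for_trial_restore(free_gb, data_file_gb):
    """Mirror of the script's guard: proceed only if free space is at
    least twice the size of the database's data file."""
    return free_gb >= data_file_gb * 2

print(enough_space_for_trial_restore(100, 40))  # True
print(enough_space_for_trial_restore(70, 40))   # False
```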

Here are the contents of the published script.

DECLARE @DBName   AS VARCHAR(100) = 'DemoDB'
DECLARE @SQLString NVARCHAR(500), @BackupFileName AS NVARCHAR(500)
DECLARE @ParmDefinition nvarchar(500)
DECLARE @Restore_DBName AS VARCHAR(100) = @DBName
DECLARE @DataFileSpaceTaken_GB AS INT, @FreeDriveSpace_GB AS INT
DECLARE @OriginalDataFileName AS VARCHAR(500), @OriginalLogFileName AS VARCHAR(500)-- just in case
IF EXISTS(SELECT database_id FROM sys.databases WHERE name=@DBName)
SELECT TOP 1 @BackupFileName = bakfams.Physical_Device_Name
, @DataFileSpaceTaken_GB = masfiles.size/1024/1024/1024
, @FreeDriveSpace_GB = osvols.available_bytes/1024/1024/1024
, @OriginalDataFileName = masfiles.physical_name
, @OriginalLogFileName = masfiles2.physical_name
FROM sys.databases AS dats
INNER JOIN msdb.dbo.backupset AS baks    ON baks.database_name = dats.name
INNER JOIN msdb.dbo.backupmediafamily AS bakfams ON baks.media_set_id = bakfams.media_set_id
INNER JOIN sys.master_files AS masfiles    ON masfiles.database_id = dats.database_id AND masfiles.type_desc = 'ROWS'
CROSS APPLY sys.dm_os_volume_stats(masfiles.database_id, masfiles.file_id) AS osvols
LEFT OUTER JOIN sys.master_files AS masfiles2  ON masfiles2.database_id = dats.database_id AND masfiles2.type_desc = 'LOG'
WHERE dats.name = @DBName
AND baks.[type]='D'
ORDER BY baks.backup_finish_date DESC
PRINT @BackupFileName
SET @Restore_DBName = @Restore_DBName + '_' + DATENAME(MONTH,GETDATE())
SET @Restore_DBName = @Restore_DBName + CONVERT(VARCHAR(2),DAY(GETDATE()))
SET @Restore_DBName = @Restore_DBName + CONVERT(VARCHAR(4),YEAR(GETDATE()))
PRINT @Restore_DBName

PRINT @DataFileSpaceTaken_GB
PRINT @FreeDriveSpace_GB
IF (@FreeDriveSpace_GB<@DataFileSpaceTaken_GB*2) BEGIN PRINT '-- not enough space --'; RETURN; END
SET @SQLString = 'RESTORE DATABASE [' + @Restore_DBName + ']'
SET @SQLString = @SQLString + ' FROM DISK = N''' + @BackupFileName + ''''
SET @SQLString = @SQLString + ' WITH FILE = 1,'
SET @SQLString = @SQLString + ' MOVE N''' + @DBName + ''''
SET @SQLString = @SQLString + ' TO N''' + REPLACE(@OriginalDataFileName,@DBName,@Restore_DBName) + ''''
SET @SQLString = @SQLString + ', MOVE N''' + @DBName + '_log'''
SET @SQLString = @SQLString + ' TO N''' + REPLACE(@OriginalLogFileName,@DBName,@Restore_DBName) + ''''
SET @SQLString = @SQLString + ', NOUNLOAD, REPLACE, STATS = 10'
EXECUTE sp_executesql @SQLString
SET @SQLString = 'DBCC CHECKDB ('+ @Restore_DBName + ')'
EXECUTE sp_executesql @SQLString
SET @SQLString = 'DROP DATABASE ' + @Restore_DBName
EXECUTE sp_executesql @SQLString
PRINT '-- 7. CREATE TEMP winlog TABLE --'
IF OBJECT_ID('tempdb..#winlog') IS NOT NULL DROP TABLE #winlog
CREATE TABLE #winlog
( rowID   INT IDENTITY(1,1)
, LogDate  DATETIME
, ProcessInfo  VARCHAR(50)
, textRow  VARCHAR(4000)
)
INSERT INTO #winlog(LogDate, ProcessInfo, textRow)
EXEC master.dbo.xp_readerrorlog
DECLARE @textRow nvarchar(500), @1stRowID AS INT = 0
SET @SQLString = 'SELECT TOP 1 @x1stRowID = rowID'
SET @SQLString = @SQLString + ' FROM #winlog'
SET @SQLString = @SQLString + ' WHERE textRow = ''Starting up database ''''' + @Restore_DBName + '''''.'''
SET @SQLString = @SQLString + ' ORDER BY rowID DESC'
SET @ParmDefinition = N'@x1stRowID AS INT OUTPUT';
EXECUTE sp_executesql @SQLString, @ParmDefinition, @x1stRowID = @1stRowID OUTPUT;
SET @SQLString = 'SELECT *'
SET @SQLString = @SQLString + ' FROM #winlog'
SET @SQLString = @SQLString + ' WHERE RowID >= @xRowID'
SET @SQLString = @SQLString + ' ORDER BY rowID DESC'
SET @ParmDefinition = N'@xRowID INT';
EXECUTE sp_executesql @SQLString, @ParmDefinition, @xRowID = @1stRowID
DROP TABLE #winlog

When executed in my lab environment, this is what I saw (the latest entries are first):

rowID LogDate ProcessInfo textRow
105 2019-11-21 07:37:28.030 spid56 DBCC CHECKDB (Hospital_November212019) WITH no_infomsgs 
executed by WIN-NVFHQDPGP9D\Administrator found 0 errors and repaired 0 errors. 
Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split 
point LSN = 00000022:00000b2f:0001 and first LSN = 00000022:00000b2d:0001. 

104 2019-11-21 07:37:27.810 Backup RESTORE DATABASE successfully processed 449 pages 
in 0.029 seconds (120.740 MB/sec).

103 2019-11-21 07:37:27.780 Backup Database was restored: Database: Hospital_November212019, 
creation date(time): 2019/01/03(14:07:43), first LSN: 34:2822:1, last LSN: 34:2825:1, 
number of dump devices: 1, device information: (FILE=1, TYPE=DISK: 
{'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\Hospital.bak'}). 
Informational message. No user action required.

102 2019-11-21 07:37:27.760 Backup Restore is complete on database 'Hospital_November212019'. The database 
is now available.

101 2019-11-21 07:37:27.620 spid56 Parallel redo is shutdown for database 'Hospital_November212019' 
with worker pool size [1].

100 2019-11-21 07:37:27.530 spid56 Parallel redo is started for database 'Hospital_November212019' 
with worker pool size [1].

99 2019-11-21 07:37:27.500 spid56 Starting up database 'Hospital_November212019'.

Note that this script works with full backups only, not differential or transaction log backups.  It needs to find at least one full backup of the designated database or it will fail.
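If you wrap this process in automation, one hedged way to turn the published results into a pass/fail flag is to scan the captured CHECKDB summary line (like row 105 above) for the reported error counts. A sketch:

```python
import re

def checkdb_clean(message):
    """Return True if a DBCC CHECKDB summary reports zero errors found,
    False if it reports errors, or None if no summary line is present."""
    m = re.search(r"found (\d+) errors and repaired (\d+) errors", message)
    if m is None:
        return None
    return int(m.group(1)) == 0

msg = ("DBCC CHECKDB (Hospital_November212019) WITH no_infomsgs executed by "
       "WIN-NVFHQDPGP9D\\Administrator found 0 errors and repaired 0 errors.")
print(checkdb_clean(msg))  # True
```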

My own raspberry pi cloud

When I was first working with computers, I used to build and upgrade my own desktop PCs.  This was back when finished PCs were pricey and it was cheaper to build your own if you were willing to put in the time and effort.  There were times of frustration but ultimately a sense of accomplishment when you were finished.  A lot of knowledge of the inner workings of a PC as well as tuning tips and tricks were earned as well.

Many years later, hardware is still relatively cheap, but a finished PC is as well.  Laptops have also replaced desktops in many situations now that their pricing has become more affordable.  These days it is typically gamers and hobbyists who still build desktops.

I gave up building my own desktops long ago, but I still have one old Gateway PC for playing old FPS games like Doom when I’m feeling nostalgic.  When it finally conks out, I will probably have to set up a virtual machine or two for this task.

I started playing with virtual machines back when I wanted to work with new versions of Oracle as soon as they were introduced.  As any Oracle veteran knows, the AIX, Unix, and Linux versions would always be rolled out first, and the Windows versions would arrive much later.  Not wanting to wait for the Windows versions, I started using VMware and eventually VirtualBox to work with the Unix versions.

Over the years, I have amassed quite a lot of VMs, not wanting to create them over and over to work with specific Oracle products and operating systems.  Colleagues have asked me for copies of some of these VMs from time to time, and their size makes it difficult to post or send them.  The same is true for the files, scripts, and toolkits that we as consultants carry around between client engagements.

The introduction of the Raspberry Pi has opened a new world of low-cost computing power that can be used in a variety of ways and I see new projects posted all the time – everything from weather stations to smart home devices.

I recently saw a project for a home cloud using a raspberry pi device that can be configured to upload and download files remotely, and I saw this as a way to post my VMs for interested colleagues.  Since I have an old (version 2) raspberry pi lying around that I unsuccessfully tried to turn into a NAS, I thought it would be perfect for this use.

I found a guide describing the process to create my own cloud, and following it allowed me to get my own cloud set up.

A word of caution, though.  Since this is not a highly-secured platform, do not place any sensitive files on this cloud.  If it is on your home network, it will require a port forward setting on your router to enable outside access.  On the positive side, you can have links secured with passwords for both folders and individual files to share with others.

Update 12/17/19

While having the personal cloud was great, there was a problem uploading large files to it.  The files in question were several GB in size, and the upload process through the web interface just could not handle them.  I found a blog post on manually adding files to owncloud, but after adding the files, the occ scan command that registers new files in the owncloud database failed with the message "This version of ownCloud is not compatible with PHP 7.3 You are currently running PHP 7.3.0.".  According to my online research, the next higher available version of owncloud, 10.3.2, seemed to have the same problem.

At the time I created this environment, owncloud was on version 10.2.14 and I had installed both PHP 7.3 and PHP 7.2, with 7.3 being the default version in use. Version 7.2 did not seem to have this problem with owncloud, so I had to alter the raspberry pi Debian environment to use PHP version 7.2 instead.

Logged into the raspberry pi as the default pi user, I started with the "which php" command to find where PHP was being executed from.  It pointed to /usr/bin/php, which in turn pointed to the link /etc/alternatives/php, and finally to /usr/bin/php7.3.

As PHP version 7.2 was also installed, the php7.2 executable was likewise located under /usr/bin.  I then removed the /etc/alternatives/php link with a simple rm command and created a new link pointing to the 7.2 executable: "ln -s /usr/bin/php7.2 /etc/alternatives/php"
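The link swap is just remove-and-relink, so it can also be scripted. A sketch in Python (the paths match the ones above, and it would need to run with sufficient privileges):

```python
import os

def repoint_symlink(link_path, new_target):
    """Replace an existing symlink so it points at new_target."""
    if os.path.islink(link_path):
        os.remove(link_path)           # equivalent of the rm step
    os.symlink(new_target, link_path)  # equivalent of ln -s

# e.g. repoint_symlink("/etc/alternatives/php", "/usr/bin/php7.2")
```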

To test the process, I manually copied files to the external drive using this process:

  • Power down the raspberry pi, remove my external USB drive
  • Copy a folder into the files directory of the default user account on the USB drive.  The name of this user is the admin account that was set up during the initial owncloud configuration.
  • Plug the USB drive back into the raspberry pi and power it up.
  • Log in as the pi user account, open a terminal, and go to /var/www/owncloud
  • Execute the scan command "sudo -u www-data php occ files:scan --all"
  • At the end of the scan, the new files were visible through the owncloud web interface

Update 1/10/20

After finding the process above for manually adding files to the external drive, I thought about having the file scan process execute on boot to save time when adding files.  I created a file under /usr/bin on my raspberry pi called file-scan, owned by root with permissions 744, containing the following two commands:

cd /var/www/owncloud

sudo -u www-data php occ files:scan --all

I then edited the /etc/rc.local file and added the line /usr/bin/file-scan and now the file scan process executes every time I boot up.

MySQL 8 test environment setup

Having just passed the newly-revised MySQL 2019 Implementation Specialist certification, I thought I would pass on some MySQL information in the form of an introduction for those thinking about creating an environment for experimentation.

My test environment will be created on an Oracle Enterprise Linux (OEL) version 7 update 7 virtual machine running under Oracle VirtualBox version 6.0.10 because I like doing things as cheaply as possible.

There are numerous articles on creating OEL virtual machines, so let me just say that OEL is available freely through Oracle E-delivery after creating a free account and Oracle VirtualBox is free as well and runs quite well under Windows 10.  For the Mac users out there, you will have to find your own virtual machine hosting utility.  Sorry, Mac.

My OEL7 VM is a simple construct of 2 CPU cores, 5GB of memory, and a 100GB dynamically sized HDD.  You should not need that much space, but I tend to size mine large and let them grow if needed.

Once the OEL7 VM is created and running, it needs to be able to access the internet to download the MySQL repository package as well as the actual MySQL software during the installation process.  Also, if you are not installing the software as the root user, the account you will be using will need sudo privileges.

Inside the VM, go to the MySQL Yum repository download page and download the desired repository RPM package.  An Oracle web account will be needed prior to being able to download; you can use the one you created for the E-delivery download earlier if necessary.  Install the package with the command: sudo rpm -Uvh <package_name>.

Execute the following command to check the available MySQL versions from the newly-installed repository:

[jonadams@localhost Downloads]$ sudo yum repolist all | grep mysql

mysql-cluster-7.5-community/x86_64 MySQL Cluster 7.5 Community disabled
mysql-cluster-7.5-community-source MySQL Cluster 7.5 Community - disabled
mysql-cluster-7.6-community/x86_64 MySQL Cluster 7.6 Community disabled
mysql-cluster-7.6-community-source MySQL Cluster 7.6 Community - disabled
mysql-cluster-8.0-community/x86_64 MySQL Cluster 8.0 Community disabled
mysql-cluster-8.0-community-source MySQL Cluster 8.0 Community - disabled
mysql-connectors-community/x86_64 MySQL Connectors Community enabled: 118
mysql-connectors-community-source MySQL Connectors Community - disabled
mysql-tools-community/x86_64 MySQL Tools Community enabled: 95
mysql-tools-community-source MySQL Tools Community - Sourc disabled
mysql-tools-preview/x86_64 MySQL Tools Preview disabled
mysql-tools-preview-source MySQL Tools Preview - Source disabled
mysql55-community/x86_64 MySQL 5.5 Community Server disabled
mysql55-community-source MySQL 5.5 Community Server - disabled
mysql56-community/x86_64 MySQL 5.6 Community Server disabled
mysql56-community-source MySQL 5.6 Community Server - disabled
mysql57-community/x86_64 MySQL 5.7 Community Server disabled
mysql57-community-source MySQL 5.7 Community Server - disabled
mysql80-community/x86_64 MySQL 8.0 Community Server enabled: 129
mysql80-community-source MySQL 8.0 Community Server - disabled

You can see which versions and tools are enabled. The enabled ones will typically be
the newest versions.  Install the enabled community server:

[jonadams@localhost Downloads]$ sudo yum install mysql-community-server -y
Loaded plugins: langpacks, ulninfo
mysql-connectors-community | 2.5 kB 00:00:00
mysql-tools-community | 2.5 kB 00:00:00
mysql80-community | 2.5 kB 00:00:00
ol7_UEKR5 | 2.5 kB 00:00:00
ol7_latest | 2.7 kB 00:00:00
(1/4): ol7_UEKR5/x86_64/updateinfo | 50 kB 00:00:00
(2/4): ol7_latest/x86_64/updateinfo | 1.1 MB 00:00:01
(3/4): ol7_UEKR5/x86_64/primary_db | 6.6 MB 00:00:01
(4/4): ol7_latest/x86_64/primary_db | 20 MB 00:00:02
Resolving Dependencies
--> Running transaction check
---> Package mysql-community-server.x86_64 0:8.0.17-1.el7 will be installed
--> Processing Dependency: mysql-community-common(x86-64) = 8.0.17-1.el7 for package: mysql-community-server-8.0.17-1.el7.x86_64
--> Processing Dependency: mysql-community-client(x86-64) >= 8.0.11 for package: mysql-community-server-8.0.17-1.el7.x86_64
--> Running transaction check
---> Package mysql-community-client.x86_64 0:8.0.17-1.el7 will be installed
--> Processing Dependency: mysql-community-libs(x86-64) >= 8.0.11 for package: mysql-community-client-8.0.17-1.el7.x86_64
---> Package mysql-community-common.x86_64 0:8.0.17-1.el7 will be installed
--> Running transaction check
---> Package mariadb-libs.x86_64 1:5.5.64-1.el7 will be obsoleted
--> Processing Dependency: for package: 2:postfix-2.10.1-7.el7.x86_64
--> Processing Dependency: for package: 2:postfix-2.10.1-7.el7.x86_64
---> Package mysql-community-libs.x86_64 0:8.0.17-1.el7 will be obsoleting
--> Running transaction check
---> Package mysql-community-libs-compat.x86_64 0:8.0.17-1.el7 will be obsoleting
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
mysql-community-libs x86_64 8.0.17-1.el7 mysql80-community 3.0 M
replacing mariadb-libs.x86_64 1:5.5.64-1.el7
mysql-community-libs-compat x86_64 8.0.17-1.el7 mysql80-community 2.1 M
replacing mariadb-libs.x86_64 1:5.5.64-1.el7
mysql-community-server x86_64 8.0.17-1.el7 mysql80-community 415 M
Installing for dependencies:
mysql-community-client x86_64 8.0.17-1.el7 mysql80-community 32 M
mysql-community-common x86_64 8.0.17-1.el7 mysql80-community 589 k

Transaction Summary
Install 3 Packages (+2 Dependent packages)

Total size: 453 M
Total download size: 415 M
Downloading packages:
No Presto metadata available for mysql80-community
warning: /var/cache/yum/x86_64/7Server/mysql80-community/packages/mysql-community-server-8.0.17-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Public key for mysql-community-server-8.0.17-1.el7.x86_64.rpm is not installed
mysql-community-server-8.0.17-1.el7.x86_64.rpm | 415 MB 00:01:24
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Importing GPG key 0x5072E1F5:
Userid : "MySQL Release Engineering <>"
Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5
Package : mysql80-community-release-el7-3.noarch (installed)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Installing : mysql-community-common-8.0.17-1.el7.x86_64 1/6
Installing : mysql-community-libs-8.0.17-1.el7.x86_64 2/6
Installing : mysql-community-client-8.0.17-1.el7.x86_64 3/6
Installing : mysql-community-server-8.0.17-1.el7.x86_64 4/6
Installing : mysql-community-libs-compat-8.0.17-1.el7.x86_64 5/6
Erasing : 1:mariadb-libs-5.5.64-1.el7.x86_64 6/6
Verifying : mysql-community-libs-8.0.17-1.el7.x86_64 1/6
Verifying : mysql-community-server-8.0.17-1.el7.x86_64 2/6
Verifying : mysql-community-common-8.0.17-1.el7.x86_64 3/6
Verifying : mysql-community-client-8.0.17-1.el7.x86_64 4/6
Verifying : mysql-community-libs-compat-8.0.17-1.el7.x86_64 5/6
Verifying : 1:mariadb-libs-5.5.64-1.el7.x86_64 6/6

Installed:
mysql-community-libs.x86_64 0:8.0.17-1.el7 mysql-community-libs-compat.x86_64 0:8.0.17-1.el7
mysql-community-server.x86_64 0:8.0.17-1.el7

Dependency Installed:
mysql-community-client.x86_64 0:8.0.17-1.el7 mysql-community-common.x86_64 0:8.0.17-1.el7

Replaced:
mariadb-libs.x86_64 1:5.5.64-1.el7

Complete!


Start the mysqld service and check the status:


[jonadams@localhost Downloads]$ sudo systemctl start mysqld.service

[jonadams@localhost Downloads]$ sudo systemctl status mysqld.service
● mysqld.service - MySQL Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-09-16 13:42:19 MDT; 5min ago
Docs: man:mysqld(8)
Process: 5098 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
Main PID: 5183 (mysqld)
Status: "Server is operational"
Tasks: 39
CGroup: /system.slice/mysqld.service
└─5183 /usr/sbin/mysqld

Sep 16 13:42:13 localhost.localdomain systemd[1]: Starting MySQL Server...
Sep 16 13:42:19 localhost.localdomain systemd[1]: Started MySQL Server.

Get the temporary root password created during the first startup of the MySQL server:

[jonadams@localhost Downloads]$ sudo grep 'temporary password' /var/log/mysqld.log
2019-09-16T19:42:15.986474Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: 3dh>C0N!<Tpg

Change the root password as soon as possible by logging in with the generated
temporary password and setting a custom password for the superuser account:

shell> mysql -uroot -p

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'SemperFidelis1!';
Query OK, 0 rows affected (0.01 sec)

Note – MySQL's validate_password component is enabled by default. It requires that passwords contain at least one uppercase letter, one lowercase letter, one digit, and one special character, and that the total password length is at least 8 characters.
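As a convenience, those default rules can be approximated locally before you send the ALTER USER statement. A sketch (this mirrors the policy described above, not the server's actual validation code):

```python
import string

def meets_default_policy(pwd):
    """Approximate MySQL's default validate_password rules: length >= 8,
    at least one upper, one lower, one digit, one special character."""
    return (len(pwd) >= 8
            and any(c.isupper() for c in pwd)
            and any(c.islower() for c in pwd)
            and any(c.isdigit() for c in pwd)
            and any(c in string.punctuation for c in pwd))

print(meets_default_policy("SemperFidelis1!"))  # True
print(meets_default_policy("password"))         # False
```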

[jonadams@localhost Downloads]$ mysql -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.17

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.


mysql> quit

Stop the mysqld service and check the status

[jonadams@localhost tmp]$ sudo systemctl stop mysqld.service
[jonadams@localhost tmp]$ sudo systemctl status mysqld.service
● mysqld.service - MySQL Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Mon 2019-09-16 17:45:44 MDT; 7s ago
Docs: man:mysqld(8)
Process: 5183 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
Process: 5098 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
Main PID: 5183 (code=exited, status=0/SUCCESS)
Status: "Server shutdown complete"

Sep 16 13:42:13 localhost.localdomain systemd[1]: Starting MySQL Server...
Sep 16 13:42:19 localhost.localdomain systemd[1]: Started MySQL Server.
Sep 16 17:45:42 localhost.localdomain systemd[1]: Stopping MySQL Server...
Sep 16 17:45:44 localhost.localdomain systemd[1]: Stopped MySQL Server.

Note that the entries are timestamped, so while at first glance it looks like the
service was started just before it was shut down, the timestamps show it was
stopped about 4 hours after it was started.

We now want to install the MySQL shell utility.  It allows you to execute commands in JavaScript, Python, or SQL without special considerations.

[jonadams@localhost lib]$ sudo yum install mysql-shell
Loaded plugins: langpacks, ulninfo
epel/x86_64/metalink | 11 kB 00:00:00
epel | 5.4 kB 00:00:00
mysql-connectors-community | 2.5 kB 00:00:00
mysql-tools-community | 2.5 kB 00:00:00
mysql80-community | 2.5 kB 00:00:00
ol7_UEKR5 | 2.5 kB 00:00:00
ol7_latest | 2.7 kB 00:00:00
(1/2): epel/x86_64/updateinfo | 1.0 MB 00:00:00
(2/2): epel/x86_64/primary_db | 6.8 MB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package mysql-shell.x86_64 0:8.0.17-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
mysql-shell x86_64 8.0.17-1.el7 mysql-tools-community 15 M

Transaction Summary
Install 1 Package

Total download size: 15 M
Installed size: 50 M
Is this ok [y/d/N]: y
Downloading packages:
mysql-shell-8.0.17-1.el7.x86_64.rpm | 15 MB 00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : mysql-shell-8.0.17-1.el7.x86_64 1/1
Verifying : mysql-shell-8.0.17-1.el7.x86_64 1/1

Installed:
mysql-shell.x86_64 0:8.0.17-1.el7

Complete!


Verify the MySQL shell installation:

[jonadams@localhost lib]$ mysqlsh
MySQL Shell 8.0.17

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

The default shell interface for the MySQL shell utility is JavaScript.  To switch between the different interfaces, use these commands.  The prompt will change to reflect the current interface.

\js for JavaScript

\sql for SQL

\py for Python

MySQL Shell comes with the option of automatically saving the history of commands, but it is disabled by default. It is a handy feature, so use this process to activate it.

Show settings and look for history.autoSave

MySQL JS > shell.options

MySQL JS > \option --persist history.autoSave=1

You can close and reopen the session and in the new one press the arrow up key to verify that the commands from previous sessions are available.

Switch to SQL mode and connect to the MySQL instance:

MySQL … JS > \sql

MySQL SQL > \c root@localhost
Creating a session to 'root@localhost'
Please provide the password for 'root@localhost': ***************
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): Y
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 8 (X protocol)
Server version: 8.0.17 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.

Notice that I opted to have MySQL Shell save my user account password.  Next time I log in, I will not have to type the password after the connect command.

Now you can submit the usual SQL commands:

SQL > show databases;
| Database |
| information_schema |
| mysql |
| performance_schema |
| sys |
4 rows in set (0.0006 sec)

Exit from MySQL Shell with “\q” or “\exit”

MySQL … JS > \q

Now that your test environment is set up, I recommend shutting it down and backing it up with the VirtualBox export tool in case your experimentation goes wrong.  It would also be a good time to review the MySQL 8 online documentation.  I recommend concentrating first on backup and recovery commands.



Creating SQL Server index fragmentation

In a few client environments, I have seen index monitoring and maintenance procedures triggered by fragmentation levels.  I have sometimes wondered how these procedures were tested when they were put in place, other than waiting for fragmentation to occur through normal operations.  Wouldn’t it be a good idea to have a process that generates a level of fragmentation to test these processes?

Here is a process that can be used as the basis for generating index fragmentation to properly test index monitoring and maintenance procedures.  I will create a test table, populate it with data, create an index on the test table, then execute a process that keeps inserting data into the test table until the desired level of index fragmentation is reached.

During testing, 100 rows were inserted into the test table initially, then each iteration of the fragmentation process inserted another 100 rows.  It took only two iterations through the process to produce 50 percent index fragmentation.  For more precise fragmentation, use a larger number of rows during the initial data insert.

NOTE – This process was created and tested on SQL Server 2016.  Be sure to set the database environment where you want to create the test table before executing these commands.

--Drop the test table if it exists
DROP TABLE IF EXISTS dbo.TestTable;

--Create the test table (column types inferred from the inserts below)
CREATE TABLE dbo.TestTable (
	RowID int IDENTITY(1,1),
	MyKeyField varchar(10),
	MyDate datetime,
	MyInt int,
	MyString varchar(30)
);

--Code to insert first rows into table

DECLARE @Lower INT, @Upper INT, @RowCount INT, @Random INT
DECLARE @RowString VARCHAR(10), @InsertDate DATETIME, @s VARCHAR(30)

SET @Lower = -730  --Random dates fall up to two years in the past
SET @Upper = -1
SET @RowCount = 0

WHILE @RowCount < 100  --Set this value to the number of rows desired
BEGIN
	SET @RowString = CAST(@RowCount AS VARCHAR(10))
	SELECT @Random = ROUND(((@Upper - @Lower - 1) * RAND() + @Lower), 0)
	SET @InsertDate = DATEADD(dd, @Random, GETDATE())

	--Build a random string from the character list
	SET @s = (
		SELECT c1 AS [text()]
		FROM (
			SELECT TOP (30) c1  --Change this value to the desired length of the resulting string.
			FROM (VALUES
				('A'), ('B'), ('C'), ('D'), ('E'), ('F'), ('G'), ('H'), ('I'), ('J'),
				('K'), ('L'), ('M'), ('N'), ('O'), ('P'), ('Q'), ('R'), ('S'), ('T'),
				('U'), ('V'), ('W'), ('X'), ('Y'), ('Z'), ('0'), ('1'), ('2'), ('3'),
				('4'), ('5'), ('6'), ('7'), ('8'), ('9'), ('!'), ('@'), ('#'), ('$'),
				('%'), ('&'), ('*'), ('('), (')'), ('?'), ('+'), ('-'), (','), ('.')
			) AS T1(c1)
			ORDER BY NEWID()
		) AS T2
		FOR XML PATH('')
	)

	--RowID is an identity column, so it is excluded from the insert.
	--The @Random value is reused for the integer column.
	INSERT INTO dbo.TestTable (MyKeyField, MyDate, MyInt, MyString)
	VALUES (REPLICATE('0', 10 - DATALENGTH(@RowString)) + @RowString
		, @InsertDate
		, @Random
		, @s)

	SET @RowCount = @RowCount + 1
END

--Verify number of rows inserted into test table
select count(*)
from TestTable;

--Create index on test table

CREATE NONCLUSTERED INDEX [TestTableIndex] ON [dbo].[TestTable]
	([MyString] ASC);

--Check initial fragmentation
select avg_fragmentation_in_percent
    from sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED')
    where object_id = (select object_id from sys.tables where name = 'TestTable')
    AND index_type_desc = 'NONCLUSTERED INDEX'

--Routine to loop through row inserts until fragmentation reaches desired level
DECLARE @FragPercent INT
DECLARE @FragEnd INT

SET @FragEnd = 30      --Set this number to degree of fragmentation desired
SET @FragPercent = 0

WHILE @FragEnd > @FragPercent
BEGIN

  select @FragPercent = max(avg_fragmentation_in_percent)
  from sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED')
  where object_id = (select object_id from sys.tables where name = 'TestTable')
  AND alloc_unit_type_desc = 'IN_ROW_DATA';

  --RowID is an identity column, so it is excluded from the insert
  insert into [dbo].[TestTable] ([MyKeyField],[MyDate],[MyInt],[MyString])
  select top(100) [MyKeyField],[MyDate],[MyInt],[MyString]
  from dbo.TestTable;

END
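The same check-then-insert loop can also be driven from a client script.  Here is a minimal Python sketch that works with any DB-API cursor; for SQL Server that cursor would typically come from the pyodbc package, which is an assumption on my part and not something the T-SQL process requires:

```python
# Query text mirrors the fragmentation check used in the T-SQL loop above.
FRAG_QUERY = (
    "select max(avg_fragmentation_in_percent) "
    "from sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') "
    "where object_id = OBJECT_ID('dbo.TestTable') "
    "and alloc_unit_type_desc = 'IN_ROW_DATA'"
)

def fragmentation_reached(cursor, target_pct=30):
    """Return True once the index fragmentation meets the target percentage."""
    cursor.execute(FRAG_QUERY)
    row = cursor.fetchone()
    return row is not None and row[0] is not None and row[0] >= target_pct
```

With a live connection you would call `fragmentation_reached(conn.cursor(), 30)` after each batch of inserts and stop the loop once it returns True.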



SQL Server Migration Tool

I was recently asked to review a migration plan from SQL Server to Oracle. The actual export and import of the table data was broken into two separate tasks and the time projected was (in my opinion) overly long. I did not know the exact details of what steps were being performed during the export and import, but I immediately thought about improving the process by using the SQL Server Import and Export Wizard.

I first encountered the SQL Server Import and Export Wizard in the early days of working with SQL Server versions 7 and 2000, and before you ask, yes, I am old.  I attended the Microsoft launch event for SQL Server 7 while living in Denver and even got the t-shirt!

Anyway, I was impressed with this tool even that early in its history, and with the fact that such a capable tool was bundled with SQL Server at no additional charge.  This is a practice that continues today.

The use of this tool is well-documented, but I will give a brief overview in this post.

The SQL Server Import and Export Wizard can be accessed in a couple of ways.  The first is directly through the Start menu in Windows:

[Screenshot: SQL Server Import and Export Wizard in the Windows Start menu]

It is interesting to note that both 32 and 64-bit versions are available in this manner.  Another method of access is directly from SQL Server Management Studio: right-click a database, go to the Tasks sub-menu, and select either Import Data or Export Data.

Once selected by either of these methods, the interface is the same:

[Screenshot: SQL Server Import and Export Wizard welcome screen]

Going to the next screen, you are prompted for a data source for your process.  The data sources range from .NET, to Access (remember it?), to ODBC data sources for Oracle and SQL Server.

[Screenshot: choosing a data source in the SQL Server Import and Export Wizard]

Take note that Microsoft has a built-in OLE DB provider for Oracle databases.  However, I have always found it easier to use the provider supplied by Oracle, which requires installing the Oracle client software on the Windows host.  If using Oracle as a source or target for an import or export process, I set up my connections through the Windows control panel before using the SQL Server Import and Export Wizard.  I find it easier, especially being familiar with Oracle networking.

The target options for the SQL Server Import and Export Wizard are similar to the source options.  From here you get the idea that you can perform the import and export in one process, provided you take care of the target specifics first.

I recommend using this tool for your smaller data migration needs, even if you are not using SQL Server.  The drawback is that you will need access to SQL Server software to install this utility.


SQL Server Assessment Script

I finally get to cross an item off my back burner project list.  Last February, I was at a client assessing their entire SQL Server database footprint for migration to the cloud.  Not having done this kind of work before, I quickly cobbled together scripts that gave me a basic overview of each database.  I then combined the results into a spreadsheet.

Wouldn’t it have been nice to get this information in a single script?  You could use this script to not only get a better understanding of resource usage in your SQL Server instance, but also use it to determine needed resources for migrating to either other hardware or the cloud.

I have already produced a lengthy health check script (here) that gives quite a bit of information.  It did not, however, take measurements of CPU usage and IOPS values over a period of time.  My new SQL Server assessment script not only gathers a smaller but more critical set of information than my health check script, but also takes hourly snapshots of CPU usage and IOPS values including individual read and write values over the course of one day.  Execute this script during a normal working day or during a period of end-of-month processing, if it exists.

Here is the script in its latest form.  To adjust the CPU and IOPS snapshots taken and the interval between snapshots, edit these two lines:

while (@cntr) < 24 -- This is the number of snapshots to take before results are produced.

waitfor delay '01:00:00' -- This is the time delay between snapshots.  It is currently set to one hour.
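Taken together, those two settings determine how long the script runs: the snapshot count multiplied by the interval.  A quick sanity check of the defaults, in Python:

```python
from datetime import timedelta

snapshots = 24                  # while (@cntr) < 24
interval = timedelta(hours=1)   # waitfor delay '01:00:00'

# Total sampling window for the default settings
print(snapshots * interval)  # 1 day, 0:00:00
```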

 --Lists the resources and their usage for a single SQL Server instance
 --Intended to act as a tool for migrations to other hardware or the cloud.

use master
 ----------------------------------------------------------Node name

SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS [CurrentNodeName];
 ----------------------------------------------------------Product Version, level, and Edition

SELECT CASE
 WHEN CONVERT(VARCHAR(128), SERVERPROPERTY ('productversion')) like '8%' THEN 'SQL2000'
 WHEN CONVERT(VARCHAR(128), SERVERPROPERTY ('productversion')) like '9%' THEN 'SQL2005'
 WHEN CONVERT(VARCHAR(128), SERVERPROPERTY ('productversion')) like '10.0%' THEN 'SQL2008'
 WHEN CONVERT(VARCHAR(128), SERVERPROPERTY ('productversion')) like '10.5%' THEN 'SQL2008 R2'
 WHEN CONVERT(VARCHAR(128), SERVERPROPERTY ('productversion')) like '11%' THEN 'SQL2012'
 WHEN CONVERT(VARCHAR(128), SERVERPROPERTY ('productversion')) like '12%' THEN 'SQL2014'
 WHEN CONVERT(VARCHAR(128), SERVERPROPERTY ('productversion')) like '13%' THEN 'SQL2016'
 WHEN CONVERT(VARCHAR(128), SERVERPROPERTY ('productversion')) like '14%' THEN 'SQL2017'
 ELSE 'unknown'
 END AS MajorVersion,
 SERVERPROPERTY('ProductLevel') AS ProductLevel,
 SERVERPROPERTY('Edition') AS Edition,
 SERVERPROPERTY('ProductVersion') AS ProductVersion;

----------------------------------------------------------License information

print N'License information';
 select SERVERPROPERTY('LicenseType') AS LicenseType,
 SERVERPROPERTY('NumLicenses') AS NumLicenses;

 ----------------------------------------------------------Database listing

print N'Database list with Status and Recovery Model';
 select substring(name,1,40) AS database_name, state_desc, recovery_model_desc
 from sys.databases;
 ----------------------------------------------------------Logical CPUs
 select cpu_count as logical_cpu_count
 from sys.dm_os_sys_info;
 ----------------------------------------------------------Physical Memory

--For SQL Server version 2008, 2008R2
 select physical_memory_in_bytes/1024/1024/1024 as GB
 from sys.dm_os_sys_info;

--For SQL Server version > 2008
 select physical_memory_kb/1024/1024 as GB
 from sys.dm_os_sys_info;

----------------------------------------------------------Memory usage per database

print N'Memory Usage per User Database';
 select substring(DB_NAME(database_id),1,40) AS [Database Name]
 ,COUNT(*) * 8/1024.0 AS [Cached Size (MB)]
 from sys.dm_os_buffer_descriptors
 where database_id > 4
 AND database_id <> 32767
 AND db_name(database_id) <> 'SSISDB'
 GROUP BY DB_NAME(database_id);

----------------------------------------------------------Database file size and growth settings

print N'Size and Growth';
 select substring(,1,40) AS DB_Name, substring(,1,40) AS Logical_Name,
 substring(a.filename,1,100) AS File_Name,
 cast((a.size * 8.00) / 1024 as numeric(12,2)) as DB_Size_in_MB,
 case when a.growth > 100 then 'In MB' else 'In Percentage' end File_Growth,
 cast(case when a.growth > 100 then (a.growth * 8.00) / 1024
 else (((a.size * a.growth) / 100) * 8.00) / 1024
 end as numeric(12,2)) File_Growth_Size_in_MB,
 case when ( maxsize = -1 or maxsize = 268435456 ) then 'AutoGrowth Not Restricted' else 'AutoGrowth Restricted' end AutoGrowth_Status
 from sysaltfiles a
 join sysdatabases b on a.dbid = b.dbid
 where DATABASEPROPERTYEX(, 'status') = 'ONLINE'
 order by;

----------------------------------------------------------Database data and log file total size
 SELECT SUM(CASE WHEN type = 0 THEN MF.size * 8 / 1024.0 /1024.0 ELSE 0 END) +
 SUM(CASE WHEN type = 1 THEN MF.size * 8 / 1024.0 /1024.0 ELSE 0 END) AS Total_GB
 FROM sys.master_files MF
 JOIN sys.databases DB ON DB.database_id = MF.database_id
 WHERE DB.source_database_id is null;
 ----------------------------------------------------------License feature usage

IF OBJECT_ID('tempdb.dbo.##enterprise_features') IS NOT NULL
 DROP TABLE ##enterprise_features

CREATE TABLE ##enterprise_features (
 dbname SYSNAME,
 feature_name VARCHAR(100),
 feature_id INT
 )

EXEC sp_msforeachdb
 N' USE [?]
 IF (SELECT COUNT(*) FROM sys.dm_db_persisted_sku_features) > 0
 BEGIN
 INSERT INTO ##enterprise_features
 SELECT dbname=DB_NAME(),feature_name,feature_id
 FROM sys.dm_db_persisted_sku_features
 END '

SELECT * FROM ##enterprise_features;

----------------------------------------------------------CPU and IOPS for 24 hours (avg. work day or end of month)

create table #CPU (snap_time varchar(30),
 row_num int,
 db_name varchar(50),
 cpu_ms int,
 cpu_pct int)

create table #IOPS (snap_time varchar(30),
 db_id int,
 db_name varchar(50),
 reads int,
 writes int,
 total_io int)

declare @cntr int
 set @cntr = 0
 while (@cntr) < 24 --set number of 1-hour loops to perform here.
 begin

 ;WITH DB_CPU_Stats AS (
 SELECT convert(varchar, getdate(), 120) AS ts,
 DatabaseID, DB_Name(DatabaseID) AS [DatabaseName],
 SUM(total_worker_time) AS [CPU_Time_Ms]
 FROM sys.dm_exec_query_stats AS qs
 CROSS APPLY (
 SELECT CONVERT(int, value) AS [DatabaseID]
 FROM sys.dm_exec_plan_attributes(qs.plan_handle)
 WHERE attribute = N'dbid') AS F_DB
 GROUP BY DatabaseID
 )
 insert into #CPU (row_num,snap_time,db_name,cpu_ms,cpu_pct)
 select ROW_NUMBER() OVER (ORDER BY [CPU_Time_Ms] DESC) AS row_num,
 ts, DatabaseName, [CPU_Time_Ms],
 CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms]) OVER() * 100.0 AS DECIMAL(5, 2)) AS [CPUPercent]
 from DB_CPU_Stats
 --WHERE DatabaseID > 4 -- system databases
 --AND DatabaseID <> 32767 -- ResourceDB

 ;WITH D AS (
 SELECT convert(varchar, getdate(), 120) AS ts,
 database_id, DB_NAME(database_id) AS Name,
 SUM(num_of_reads) AS IO_Reads,
 SUM(num_of_writes) AS IO_Writes,
 SUM(num_of_reads + num_of_writes) AS Total_IO
 FROM sys.dm_io_virtual_file_stats(NULL, NULL)
 GROUP BY database_id, DB_NAME(database_id)
 )
 insert into #IOPS (snap_time,db_id,db_name,reads,writes,total_io)
 select D.ts, D.database_id, D.Name,
 SUM(D.IO_Reads) - SUM(L.num_of_reads) AS Data_Reads,
 SUM(D.IO_Writes) - SUM(L.num_of_writes) AS Data_Writes,
 SUM(D.IO_Reads + D.IO_Writes) - SUM(L.num_of_reads + L.num_of_writes) AS Total_IO
 FROM D
 JOIN sys.dm_io_virtual_file_stats(NULL, 2) L
 ON D.database_id = L.database_id
 GROUP BY D.ts, D.database_id, D.Name
 ORDER BY D.database_id;

 waitfor delay '01:00:00' --change this to determine the interval between snapshots
 set @cntr = @cntr + 1
 end


select db_name, avg(cpu_pct) as AVG_CPU
 from #CPU
 where db_name is not null
 group by db_name
 order by db_name;

select db_name, snap_time, MAX(cpu_pct) as MAX_CPU
 from #CPU
 group by db_name, snap_time
 having MAX(cpu_pct) > 0
 order by MAX(cpu_pct) desc;

select db_name, min(total_io) AS MIN_IO, max(total_io) AS MAX_IO, avg(total_io) AS AVG_IO
 from #IOPS
 group by db_name
 order by db_name;

select snap_time, db_name, reads, writes, total_io
 from #IOPS
 where total_io = (select max(total_io) from #IOPS);

drop table #CPU;

drop table #IOPS;