Oracle Multitenant – Part 2 – Creating, connecting to, removing, and querying CDBs and PDBs

Now that we have an understanding of the multitenant architecture from part 1, we want to add some practical experience, starting with the creation of CDBs and PDBs.  My test environment is an Oracle VirtualBox v5.1.22 host running an Oracle Enterprise Linux 7.3 guest with Oracle version 12.2.0.1.0 software installed.

Creating CDBs and PDBs

NOTE – If you create a CDB under Oracle 12c release 1, the following features are not supported:

  • Flashback Transaction Query
  • Flashback Transaction Backout
  • Database Change Notification
  • Continuous Query Notification
  • Client Side Cache
  • Heat map
  • Automatic Data Optimization
  • Oracle Streams
  • Oracle Fail Safe release 4.1.0 that ships with Oracle Database 12.1.0.2.

The preferred CDB creation method is through the Database Configuration Assistant (DBCA).  The DBCA wizard will guide you through the process and automatically execute the proper catalog scripts once the CDB is created.  It can also generate all the scripts you need to create the CDB manually, based on the choices made in the create database wizard, as well as a CDB creation template that you can copy to other environments and reuse with DBCA.

Once you log in to the OS as the Oracle software owner and set your environment, execute $ORACLE_HOME/bin/dbca:

create_cdb1

The obligatory Oracle splash screen appears

create_cdb2

Choose the ‘create database’ option and proceed to the next screen

create_cdb3

We are going to choose ‘typical configuration’ here to save time.  On this screen, we can select the storage type, DB file location, character set, and designate the administrative (SYS and SYSTEM) passwords.  More importantly, we have the option to create the new database as a CDB or non-CDB database.  Since CDB is our goal here, we check the CDB box and designate the name of our inclusive pluggable database, then proceed.

create_cdb4

The summary screen appears to confirm our choices.

create_cdb5

If you like, you can watch the progress bar and the progress of the individual steps as the database is being built

create_cdb6

Acknowledge the finish screen when the process completes to exit the DBCA

create_cdb7

What if we had selected the ‘advanced configuration’ option earlier?  Choosing it automatically disables all the options under typical configuration.

create_cdb_adv1

Now we need to select the type of database (single-instance, RAC, or RAC one-node) and the characteristics (custom, general purpose, or data warehouse).  We’ll select single-instance and general purpose, then proceed.

create_cdb_adv2

There are some interesting choices on the next screen.  After providing a global database name and SID, you can choose a CDB or non-CDB database.  If you choose a CDB, you can opt for an empty CDB with no PDBs (except the PDB$SEED) or you can choose to create one or more PDBs at the same time.  Another option that is newly available in 12c release 2 is the option to create a separate UNDO tablespace for each PDB.
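If you opt for local undo, you can confirm the setting after creation with a quick query (a sketch assuming a 12.2 CDB, run from the root):

SQL> select property_name, property_value
  2  from database_properties
  3  where property_name = 'LOCAL_UNDO_ENABLED';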

create_cdb_adv3

Storage options come next

create_cdb_adv4

Recovery options on this screen.  I chose to create a flash recovery area with the recommended size and enable archiving.

create_cdb_adv5

Listener selection is next.  There are no existing database listeners in the system, so I opt to create one with a default name and port number.  It is recommended to avoid using the default 1521 port for security reasons, but since this is a closed system, there should be no concerns.

create_cdb_adv6

At the next screen, I chose not to install Database Vault or Label Security

create_cdb_adv7

The next screen is multi-tabbed, with options for memory allocation, sizing of processes, selection of character sets, connection mode, and installation of sample schemas.  I chose the defaults in each section.

create_cdb_adv8

I opted to configure Enterprise Manager Database Express.  We may want to view our CDB options for backup and recovery later in a separate post.

create_cdb_adv9

Specify the administrative user credentials at the next screen

create_cdb_adv10

This screen prompts you with options to create the database, create a template of your choices for additional CDBs of this type, or generate scripts for manual database creation.  I opted for all three.  Why not?

create_cdb_adv11

Confirm your choices on the summary screen

create_cdb_adv12

If you like, you can watch the progress bar and the progress of the individual steps as the database is being built

create_cdb_adv13

Acknowledge the finish screen when the process completes to exit the DBCA

create_cdb_adv14

Creating a PDB

If you have a CDB and want to add a PDB to it, the DBCA is again the preferred method.

To create a new PDB, start DBCA. Once you get past the 12c splash screen, select ‘Manage pluggable databases’ on the first screen, then proceed.

create_pdb1

This screen contains all the PDB options available in your current configuration, such as create, delete, unplug, or configure.  The delete, unplug, and configure options will only be available if there is an existing PDB.  Select ‘create a pluggable database’, then proceed.

create_pdb2

Designate the CDB that will host your new PDB and give the admin account credentials for the designated CDB, then proceed.

create_pdb3

Here you can designate whether to create a new PDB from the ‘seed’ or create a new PDB from an existing PDB.  We are going to use the seed since we don’t have an existing PDB.

create_pdb4

Give a name to the new PDB and the credentials of the new administrative account for the PDB.  It is interesting to note here that the wizard is not looking for a typical SYS or SYSTEM account since these accounts already exist at the CDB level.

create_pdb5

Storage options come next, as well as the option to create a default user tablespace.  The default user tablespace will be called USERS if you choose this option.  It is interesting to note here that even though the option is not given, the new PDB will have its own UNDO tablespace by default.

create_pdb6

Confirm your choices on the summary screen, then proceed.

create_pdb7

Watch the progress of the PDB creation.

create_pdb8

Acknowledge the finish screen when the process completes to exit the DBCA

create_pdb9

Other methods to create a PDB include:

  • Using the ‘create pluggable database’ command while connected to the CDB (see the sketch after this list).
  • Plugging an existing PDB into the CDB.
  • Using Oracle Cloud Control to guide you through the creation of the PDB.
  • Cloning an existing PDB to a new PDB, with or without data.
  • Creating an empty PDB and using Oracle Data Pump to import data from one database source into a PDB.
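As a rough sketch of the first method, assuming a connection to the CDB root as a user with the CREATE PLUGGABLE DATABASE privilege; the PDB name, password, and file name conversion below are illustrative, and FILE_NAME_CONVERT can be omitted when Oracle Managed Files are in use:

create pluggable database JON12PDB2 admin user pdbadmin identified by password1
  file_name_convert = ('/u01/app/oracle/oradata/JON12CDB/pdbseed/',
                       '/u01/app/oracle/oradata/JON12CDB/JON12PDB2/');

-- the new PDB is created in MOUNTED mode and must be opened before use
alter pluggable database JON12PDB2 open;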

Starting and stopping the CDB

Starting and stopping a CDB is just like starting and stopping a non-CDB database.  Use the SQL*Plus utility (or Oracle Enterprise Manager) and issue the startup and shutdown commands as usual.  All of the options still exist, such as startup force, shutdown abort, and shutdown immediate.
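For example, a minimal sketch from SQL*Plus, assuming a SYSDBA connection to the CDB:

SQL> shutdown immediate
SQL> startup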

Starting and stopping the PDB

First, determine the state of the PDB before issuing start or stop commands by connecting to the CDB with an account having SYSDBA, SYSOPER, SYSBACKUP or SYSDG privileges:

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
JON12PDB1                      MOUNTED

The PDB$SEED database is ALWAYS read only.  As the template database, it needs to maintain its integrity.  Since the PDB is already mounted, you just need to open it:

SQL> alter pluggable database JON12PDB1 open;

Pluggable database altered.

You can execute the commands directly without the ‘pluggable database’ syntax by changing to the PDB within the session first:

SQL> alter session set container=JON12PDB1;

Session altered.

SQL> shutdown
Pluggable Database closed.

I would not recommend this method, however, since it would be easy to forget which container you are pointing to; mistakenly issuing a shutdown against the CDB could make you think about updating your resume.

Other methods:

  • Use the ‘alter pluggable database’ command to start and stop the PDB when connected to the CDB with an account having SYSDBA, SYSOPER, SYSBACKUP, or SYSDG privileges.
  • Use the SQL*Plus ‘startup pluggable database’ and ‘shutdown pluggable database’ commands when connected to the CDB with an account having the same privileges.

PDB startup options (issue these while connected to the CDB root; if you have switched into the PDB with ‘alter session set container’, use a plain ‘startup’ instead):

Open (default) – Open the PDB normally (startup pluggable database JON12PDB1;)

Force – Force the PDB open (startup pluggable database JON12PDB1 force;)

Upgrade – Open the PDB in upgrade mode (startup pluggable database JON12PDB1 upgrade;)

Restricted – Open the PDB in restricted mode (startup pluggable database JON12PDB1 restricted;)

Read only – Open the PDB in read only mode (startup pluggable database JON12PDB1 open read only;)

For those who don’t want to have to execute separate startup commands for the PDBs after starting the CDB, there is a means by which you can set the default state per PDB after the CDB is started.

For example, after starting the CDB, set your desired PDB to the desired state, be it OPEN (read/write) or OPEN READ ONLY, then execute the SAVE STATE command from the CDB:

alter pluggable database <PDB_NAME> save state;

Now when the CDB starts, the respective PDB will automatically go to the saved state.  You can view the saved state of each PDB in the DBA_PDB_SAVED_STATES view.  Please note that the saved state feature for PDBs is available only in Oracle software versions 12.1.0.2 and higher.
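A minimal sketch, using the PDB from this post:

SQL> alter pluggable database JON12PDB1 open;
SQL> alter pluggable database JON12PDB1 save state;
SQL> select con_name, state from dba_pdb_saved_states;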

To clear the saved state of the PDB, issue the DISCARD STATE command:

alter pluggable database <PDB_NAME> discard state;

This command will set the state of the particular PDB back to the MOUNTED default.  Changing the saved state of the desired PDB is a simple matter of placing the PDB in the desired state and issuing the SAVE STATE command again.

PDB open options:

Open (default) – Open the PDB in read write mode (alter pluggable database JON12PDB1 open;)

Open read only – Open the PDB in read only mode (alter pluggable database JON12PDB1 open read only;)

Open resetlogs – Open the PDB after a point-in-time-recovery (PITR) (alter pluggable database JON12PDB1 open resetlogs;)

Open restricted – Open the PDB in restricted mode (alter pluggable database JON12PDB1 open restricted;)

Force – Force shutdown and restart (alter pluggable database JON12PDB1 open force;)

Upgrade – Open in upgrade mode (alter pluggable database JON12PDB1 open upgrade;)

PDB shutdown options (as in the example above, first switch into the PDB with the ‘alter session set container’ command while connected to the CDB):

Normal (default) – Close the PDB normally, waiting for transactions to end (shutdown)

Immediate – Close the PDB immediately, rolling back any pending transactions (shutdown immediate)

Abort – Shut down the PDB without delay; dirty buffers are not written to disk (shutdown abort)

Transactional – Shut down the PDB after all active transactions have completed (shutdown transactional)

PDB close options:

Close (default) – Close the PDB normally (alter pluggable database JON12PDB1 close;)

Immediate – Force the PDB closed (alter pluggable database JON12PDB1 close immediate;)
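One convenience worth noting: instead of a single PDB name, the ‘alter pluggable database’ commands above also accept a comma-separated list of PDB names or the ALL keyword, with an optional EXCEPT clause (a sketch, assuming 12c syntax):

SQL> alter pluggable database all open;
SQL> alter pluggable database all except JON12PDB1 close immediate;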

 

Connecting to the CDB/PDB

Accessing the CDB is the same as accessing a non-CDB database and we have already discussed accessing the PDB while connected to the CDB, but what about accessing the PDB directly from outside the CDB?  As I mentioned in my last post, listener services are automatically created for the CDB and each PDB on creation.  Here you can see the JON12CDB and JON12PDB1 services when checking the listener status.

listener_status

Make sure the CDB is started and the PDB is open.  You can query the v$pdbs view from the CDB to see the status of all the enclosed PDBs.  Once you are sure the PDB is open, you can add an entry to your tnsnames.ora file to access the PDB directly (The entry for the CDB should already be present):

JON12PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JON12PDB1)
    )
  )

You should then be able to access your PDB directly:

pdb_remote_connection
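For example, from SQL*Plus using the tnsnames.ora entry above (the password is illustrative):

SQL> connect system/password1@JON12PDB1
Connected.
SQL> show con_name

CON_NAME
------------------------------
JON12PDB1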

Dropping a CDB and PDB

Dropping or removing a CDB is the same as dropping a non-CDB database.  The only difference when dropping a CDB is that all PDBs beneath it will be removed as well.

Dropping or removing a PDB is a bit more complicated when performing the task manually, but not by much.  When connected to the CDB that contains the PDB, use the ‘alter pluggable database’ command to first close the PDB, then issue the ‘drop pluggable database <PDB_name> including datafiles;’ to complete the process.  Using the DBCA to drop a PDB is even easier as it performs the closing of the PDB automatically, if necessary.
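Put together, a minimal sketch of the manual method while connected to the CDB:

SQL> alter pluggable database JON12PDB1 close immediate;
SQL> drop pluggable database JON12PDB1 including datafiles;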

Other PDB removal options:

  • Using the ‘drop pluggable database’ command at the SQL prompt.
  • Using Oracle Enterprise Manager Cloud Control to remove the PDB.
  • Unplugging the PDB.
  • Deleting the entire CDB.

 

Querying the CDB and PDB

Once you are connected to the CDB or PDB, queries function the same as they have in non-CDB databases.  What is more important in this stage of our multitenant experience is knowing about multitenant-specific commands and objects.  For instance, when you are connected to a CDB, you can use the command ‘show pdbs’ to get all associated PDBs listed with their current state:

SQL> show pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
------ ------------------------------ ---------- ----------
     2 PDB$SEED                       READ ONLY  NO
     3 JON12PDB1                      READ WRITE NO

Remember the PDB$SEED PDB is ALWAYS open in read only mode.

The container ID (CON_ID) is an important identifier in the multitenant environment.  The ‘show con_id’ command displays the CON_ID of the current CDB or PDB you are in.  Notice the value changes when I switch from the CDB to the PDB with the ‘alter session set container’ command:

SQL> show con_id

CON_ID
------------------------------
1

SQL> alter session set container=jon12pdb1;

Session altered.

SQL> show con_id

CON_ID
------------------------------
3

NOTE – Once you alter your session to switch from the CDB to a PDB, you can switch back to the root with ‘alter session set container=CDB$ROOT’ if you are connected as a common user with the SET CONTAINER privilege; otherwise, exit and open a new session to access the CDB once more.

The ‘show con_name’ command is similar to the ‘show con_id’ except that the name and not the ID is displayed:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> alter session set container=jon12pdb1;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
JON12PDB1

 

If you need a list of the PDB IDs and names, query the v$pdbs view:

SQL> select con_id, name
  2  from v$pdbs;

    CON_ID NAME
---------- ----------
         3 JON12PDB1

In some views, the CON_ID column has been added for multitenant identification as in the case of v$tablespace:

select a.con_id, a.name, b.name
from v$tablespace a, v$pdbs b
where a.con_id=b.con_id order by 1;

    CON_ID NAME       NAME
---------- ---------- ----------
         2 TEMP       PDB$SEED
         2 UNDOTBS1   PDB$SEED
         2 SYSAUX     PDB$SEED
         2 SYSTEM     PDB$SEED
         3 TEMP       JON12PDB1
         3 UNDOTBS1   JON12PDB1
         3 SYSAUX     JON12PDB1
         3 USERS      JON12PDB1
         3 SYSTEM     JON12PDB1

NOTE – This is a listing from the CDB.  If you are connected to a PDB, you will only see the tablespaces for that particular PDB.

In other views, some filtering has been enabled, such as dba_tables.  In this case, you will see only tables associated with the particular CDB or PDB you are connected to:

SQL> select count(*)
 2  from dba_tables;

  COUNT(*)
----------
      2106

SQL> alter session set container=jon12pdb1;

Session altered.

SQL> select count(*)
 2  from dba_tables;

  COUNT(*)
----------
      2105

The difference in the counts here is that when you are connected to the PDB, you cannot see the CDB-specific objects.  The CDB_TABLES view allows you to identify tables as belonging to a CDB or PDB:

select a.con_id, b.name, count(*)
from cdb_tables a, v$containers b
where a.con_id=b.con_id
group by a.con_id, b.name;

    CON_ID NAME         COUNT(*)
---------- ---------- ----------
         1 CDB$ROOT         2106
         3 JON12PDB1        2105

This view lets you see the CDB and enclosed PDB tables when connected to the CDB.  However, it does not work the same when connected to the PDB:

SQL> alter session set container=jon12pdb1;

Session altered.

SQL> select a.con_id, b.name, count(*)
  2  from cdb_tables a, v$containers b
  3  where a.con_id=b.con_id
  4  group by a.con_id, b.name;

    CON_ID NAME         COUNT(*)
---------- ---------- ----------
         3 JON12PDB1        2105

 

Now that you are on the PDB, the view only lets you see tables associated with that particular PDB.

Here are some other admin views available:

V$CONTAINERS – Contains information on the containers in the database, including the ROOT and all PDBs.

V$PDB_INCARNATION – Provides information on incarnations specific to given PDBs.

V$PDBS – Provides information on PDBs within the database.

DBA_CONTAINER_DATA – Displays various container data for container data objects.

DBA_HIST_PDB_INSTANCE – Contains workload repository records for PDBs.

DBA_PDB_HISTORY – Contains the history of a PDB. Only visible from within a specific PDB.

DBA_PDB_SAVED_STATES – Contains the saved state of a PDB.

DBA_PDBS – Contains a list of the PDBs in the database. Only visible from the ROOT container.

PDB_ALERTS – Provides information on specific PDB alerts.

PDB_PLUG_IN_VIOLATIONS – Provides information on incompatibilities between the CDB and the PDB.

 

We’ve covered quite a bit of territory in this post.  Next up: cloning and migrating PDBs.

 

Oracle Multitenant – Part 1 – What is multitenant?

The Oracle database multitenant architecture was introduced in version 12c release 1. This architecture consists of a container database (CDB) that contains one or more pluggable databases (PDB). The CDB acts as a metadata repository for the PDBs and a source of user accounts that are utilized across the PDBs called common users. The CDB and PDB share resources, but not data. A PDB template or “seed” database allows the quick deployment of PDBs out of the CDB. This architecture was created with consolidation in mind and according to the Oracle 12c release 1 documentation, will become the default database architecture as the non-CDB architecture is deprecated (https://docs.oracle.com/database/121/UPGRD/deprecated.htm#BABDBCJI).

My current work has been in non-CDB environments, but given that non-CDB databases are being deprecated, it is in my interest to start working with CDB databases. I plan to share this work with you in a series of posts for (hopefully) mutual benefit.

The multitenant architecture is a separately licensed option from Oracle, but there is an exception. If the CDB contains a single PDB, then no additional licensing is required.   This way, you can start converting non-CDB databases to CDB without having to accrue additional licensing cost.

The PDBs can be ‘plugged’ and ‘unplugged’ between container databases within limits. Access to the PDBs within a CDB is handled via SQL*Net. Whenever a PDB is created, a service of the same name is also created to handle the PDB connections and the service is automatically registered with the database listener.   It is recommended to change the name of this service for security reasons.
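You can see which service belongs to which container by querying v$services from the root (a quick check; the PDB column is available in 12c):

SQL> select name, pdb from v$services order by pdb, name;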

While the CDB and PDB share resources such as memory and processes, each has its own SYSTEM and SYSAUX tablespace with associated objects.   There is one common UNDO tablespace at the CDB level (one per instance for RAC) to be shared, but each PDB has its own TEMP tablespace.

The Oracle documentation contains a laundry list of consolidation-related benefits that this new architecture provides:

  • Cost reduction: By consolidating hardware and sharing database memory and files, you reduce costs for hardware, storage, availability, and labor. For example, 100 PDBs on a single server share one database instance and one set of database files, thereby requiring less hardware and fewer personnel.
  • Easier and more rapid movement of data and code: By design, you can quickly plug a PDB into a CDB, unplug the PDB from the CDB, and then plug this PDB into a different CDB. The implementation technique for plugging and unplugging is similar to the transportable tablespace technique.
  • Easier management and monitoring of the physical database: The CDB administrator can attend to one physical database (one set of files and one set of database instances) rather than split attention among dozens or hundreds of non-CDBs. Backup strategies and disaster recovery are simplified.
  • Separation of data and code: Although consolidated into a single physical database, PDBs mimic the behavior of non-CDBs. For example, if user error loses critical data, a PDB administrator can use Oracle Flashback or point-in-time recovery to retrieve the lost data without affecting other PDBs.
  • Secure separation of administrative duties: A user account is common, which means that it can connect to any container on which it has privileges, or local, which means that it is restricted to a specific PDB. A CDB administrator can use a common user account to manage the CDB. A PDB administrator uses a local account to manage an individual PDB. Because a privilege is contained within the container in which it is granted, a local user on one PDB does not have privileges on other PDBs within the same CDB.
  • Ease of performance tuning: It is easier to collect performance metrics for a single database than for multiple databases. It is easier to size one SGA than 100 SGAs.
  • Support for Oracle Database Resource Manager: In a multitenant environment, one concern is contention for system resources among the PDBs running on the same computer. Another concern is limiting resource usage for more consistent, predictable performance. To address such resource contention, usage, and monitoring issues, you can use Oracle Database Resource Manager.
  • Fewer database patches and upgrades: It is easier to apply a patch to one database than to 100 databases, and to upgrade one database than to upgrade 100 databases.

The previous benefits were from the first release of CDB in Oracle version 12c release 1. Oracle 12c release 2 added the following features, although this is not a complete list:

  • CDBs can now support “thousands” of PDBs instead of the previous maximum of 252.
  • A PDB can now have a different character set than the CDB
  • Point-in-time copies of PDBs possible for development and testing purposes
  • PDB flashback now possible by setting up restore points
  • PDBs can have their own UNDO tablespaces
  • Upgrade a CDB with one or more PDBs in a single operation

 

Next up in this series – Creating, connecting to, removing, and querying CDBs and PDBs

 

Creating Corruption for Recovery Practice

Backup and recovery should be the first item on any DBA’s list when administering a new environment.  Often the emphasis is placed on backups, but that is only half of a successful backup and recovery solution for a production environment.  Making sure recovery from a situation such as block corruption is possible using those backups is the other, and arguably more important, half.

As newer versions of Oracle add more availability features and higher reliability, block corruption rarely happens.  New recovery features, like table recovery in version 12c, are also introduced over time.  This is a good thing, but the problem becomes practicing recovery methods when recovery is not necessary.  The only option most DBAs have is recovering production database backups into a non-production environment.  While this may be enough of a backup validation process for most, it leaves the question of how to handle a situation where less than a full recovery of production data is necessary.  Documenting proven recovery steps for full as well as partial recoveries will help during times of crisis when tension is high.

Using a virtual environment, I will demonstrate how to manually corrupt a database so that recovery becomes necessary.  The actual steps to recover the database once it is damaged will not be covered in this post as it has already been documented extensively in both the Oracle documentation and other DBA posts.

My virtual environment will consist of a single-instance non-ASM non-container database version 12.1.0.2.0 on an Oracle Enterprise Linux 7u3 platform.  I will mention additional steps needed to perform this process in an ASM or Windows-hosted environment as well.  In this Linux environment, we will use the dd utility to perform the actual corruption.

After ensuring my database was in archivelog mode, I created my test objects:

SQL> create tablespace jontest_tsp datafile '/u01/app/oracle/oradata/JON12C/datafile/jontest_tsp1.dbf'
  2  size 100M autoextend on next 1M extent management local segment space management auto;

Tablespace created.

SQL> create user jontest identified by semperf1 default tablespace jontest_tsp;

User created.

SQL> grant connect, resource to jontest;

Grant succeeded.

SQL> alter user jontest quota unlimited on jontest_tsp;

User altered.

SQL> conn jontest/semperf1
Connected.


SQL> create table table_dbc as select dbms_random.value(0,100) N1 from dual connect by level <= 1000;

Table created.

 

SQL> column segment_name format a10;

SQL> select segment_name, tablespace_name from user_segments where
  2  segment_name='TABLE_DBC';

SEGMENT_NA TABLESPACE_NAME
---------- ------------------------------
TABLE_DBC  JONTEST_TSP

Now that we have our test objects in place, it is time to get a backup of our environment.

RMAN> backup database plus archivelog delete input;

Starting backup at 07-JUN-17
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=17 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4 RECID=1 STAMP=946036509
input archived log thread=1 sequence=5 RECID=2 STAMP=946040464
channel ORA_DISK_1: starting piece 1 at 07-JUN-17
channel ORA_DISK_1: finished piece 1 at 07-JUN-17
piece handle=/u01/app/oracle/fast_recovery_area/JON12C/backupset/2017_06_07/o1_mf_annnn_FULL_20170607_dmjhv16k_.bkp tag=FULL_20170607 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/JON12C/archivelog/2017_06_07/o1_mf_1_4_dmjczcsb_.arc RECID=1 STAMP=946036509
archived log file name=/u01/app/oracle/fast_recovery_area/JON12C/archivelog/2017_06_07/o1_mf_1_5_dmjhv02b_.arc RECID=2 STAMP=946040464
Finished backup at 07-JUN-17

Starting backup at 07-JUN-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/JON12C/datafile/o1_mf_system_dmj83mqt_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/JON12C/datafile/o1_mf_sysaux_dmj82jkj_.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/JON12C/datafile/jontest_tsp1.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/JON12C/datafile/o1_mf_undotbs1_dmj851z7_.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/JON12C/datafile/o1_mf_users_dmj850wr_.dbf
channel ORA_DISK_1: starting piece 1 at 07-JUN-17
channel ORA_DISK_1: finished piece 1 at 07-JUN-17
piece handle=/u01/app/oracle/fast_recovery_area/JON12C/backupset/2017_06_07/o1_mf_nnndf_TAG20170607T130106_dmjhv391_.bkp tag=TAG20170607T130106 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 07-JUN-17
channel ORA_DISK_1: finished piece 1 at 07-JUN-17
piece handle=/u01/app/oracle/fast_recovery_area/JON12C/backupset/2017_06_07/o1_mf_ncsnf_TAG20170607T130106_dmjhxrz3_.bkp tag=TAG20170607T130106 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 07-JUN-17

Starting backup at 07-JUN-17
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=3 STAMP=946040554
channel ORA_DISK_1: starting piece 1 at 07-JUN-17
channel ORA_DISK_1: finished piece 1 at 07-JUN-17
piece handle=/u01/app/oracle/fast_recovery_area/JON12C/backupset/2017_06_07/o1_mf_annnn_FULL_20170607_dmjhxt9k_.bkp tag=FULL_20170607 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/JON12C/archivelog/2017_06_07/o1_mf_1_6_dmjhxt47_.arc RECID=3 STAMP=946040554
Finished backup at 07-JUN-17

RMAN>

If your particular environment is using ASM for datafile storage instead of the filesystem like my environment, follow these steps to copy the datafile to the OS where you can apply the corruption process.  Make note of the backup tag identifier.

alter database datafile '+DATA/RAC/DATAFILE/jontest_tsp1.df' offline;

rman target=/

RMAN> BACKUP AS COPY DATAFILE '+DATA/RAC/DATAFILE/jontest_tsp1.df' FORMAT '/tmp/jontest_tsp_%f';

After the datafile is out of ASM or the datafile is in a filesystem, check the header block as a SYSDBA user:

SQL> select header_file, header_block from dba_segments where segment_name='TABLE_DBC';

HEADER_FILE HEADER_BLOCK
----------- ------------
          5          130

The header of the segment (table) is block 130, so if we damage the next block (131) with dd, we will cause datafile/table corruption.

[oracle@localhost ~]$ dd of=/u01/app/oracle/oradata/JON12C/datafile/jontest_tsp1.dbf bs=8192 conv=notrunc seek=131 <<EOF
> #corrupt_this#
> EOF
0+1 records in
0+1 records out
15 bytes (15 B) copied, 8.3023e-05 s, 181 kB/s

Execute DBVERIFY to confirm that we have generated some corruption:

[oracle@localhost ~]$ dbv file=/u01/app/oracle/oradata/JON12C/datafile/jontest_tsp1.dbf blocksize=8192

DBVERIFY: Release 12.1.0.2.0 - Production on Wed Jun 7 14:08:25 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - Verification starting : FILE = /u01/app/oracle/oradata/JON12C/datafile/jontest_tsp1.dbf
Page 131 is marked corrupt
Corrupt block relative dba: 0x01400083 (file 5, block 131)
Bad header found during dbv: 
Data in bad block:
 type: 35 format: 3 rdba: 0x74707572
 last change scn: 0x2373.6968745f seq: 0xa flg: 0x04
 spare1: 0x6f spare2: 0x72 spare3: 0x0
 consistency value in tail: 0x8ca90602
 check value in block header: 0xb1ea
 computed block checksum: 0x176

As an interesting side note, even though DBVERIFY noticed the corruption, if we queried v$database_block_corruption, we would not see any entries:

SQL> select * from v$database_block_corruption;

no rows selected

If we immediately query the affected table, the database will see the corruption:

SQL> select count(*)
  2  from table_dbc;
select count(*)
       *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 5, block # 131)
ORA-01110: data file 5: '/u01/app/oracle/oradata/JON12C/datafile/jontest_tsp1.dbf'

The v$database_block_corruption view now has the appropriate entries

SQL> select * from v$database_block_corruption;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO     CON_ID
---------- ---------- ---------- ------------------ --------- ----------
         5        131          1                  0 CORRUPT            0

RMAN will also populate this view if it encounters corrupt blocks.  This is why it is a good practice to either review RMAN backup logfiles or query the v$database_block_corruption view at the end of a backup.
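If you want to populate the view proactively without taking a real backup, an RMAN validation pass is one option (a sketch, valid in 11g and later):

RMAN> backup validate check logical database;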

If this was an ASM datafile, restore it to the original location:

rman target /

RUN
{
SET MAXCORRUPT FOR DATAFILE '+DATA/RAC/DATAFILE/jontest_tsp1.df' TO 3;
RESTORE DATAFILE '+DATA/RAC/DATAFILE/jontest_tsp1.df' FROM TAG
'TAG20120303T111330';
RECOVER DATAFILE '+DATA/RAC/DATAFILE/jontest_tsp1.df';
}

After the restore and recovery, the datafile must be brought online:

alter database datafile '+DATA/RAC/DATAFILE/jontest_tsp1.df' online;

Now you can practice a datafile, table, or block recovery.

If we wanted to corrupt a datafile in a Windows environment, we could use an editor such as UltraEdit, which is also available in a Linux version.  Using this editor, we can either modify the header of the file to corrupt the datafile, or scroll to the table data to corrupt just the table in order to practice table or block recovery.

I know this goes without saying, but just like the warning labels on hair dryers that tell you not to use them in the bathtub, YOU SHOULD NOT PERFORM THIS PROCESS AGAINST A PRODUCTION DATABASE, EVEN IF YOU CREATE A SEPARATE TABLESPACE THAT CONTAINS NO PRODUCTION DATA.  You know the reason for most warning labels is that someone probably did the very thing the label warns against doing.

 

Get in the Penalty Box!

Sometimes, there exist database users with no regard for what type of queries they run or how long they run for.  These users typically kick off a query and then go for coffee, lunch, or leave for the day.  Their query sits out there spinning and consuming resources that you know could be better used elsewhere, especially if end-of-the-month aggregation processes or reports are being executed.

Maybe you have talked to them before about this behavior or even helped them tune a query or two, but they don’t seem to get the message.  In extreme cases, these users even terminate sessions when the query does not come back in a timely manner only to reconnect and retry it in a new session, leaving the old one (possibly) still executing.

It’s time to let these users know that their behavior is not acceptable in an environment where resources are finite and they need to share with other higher priority processes.  They need to spend some time in the penalty box.  For those unfamiliar with this term, it is a term used in ice hockey where misbehaving players have to sit out the game in a segregated area for a certain length of time, depending on the seriousness of their misdeeds.  The team with the misbehaving player is not allowed to send in a substitute player during the penalty time, so it leaves them one player short and vulnerable to be scored against.

The Oracle Resource Manager is the keeper of the database penalty box.  It has the ability to enforce limits on resources such as CPU and I/O consumption, as well as execution time limits.  We are going to use this last feature to create our own penalty box.  I realize that most database managers don’t want to put constraints on users, especially developers, but in all honesty, if they are writing queries that take an hour or more to execute on a regular basis, then the managers need to put their foot down and the users or developers may need a remedial SQL tuning course.

A common misconception about resource plans is that they will apply to all users in the database.  This is not true and I will demonstrate how we can apply a resource plan to only the particular users that we determine are misbehaving.

Here is a simple process to create a resource plan that will limit execution time to 60 seconds.  This limit is just for testing purposes as even I have been known to write queries that run for over a minute 😉

Check for a currently active resource plan:

SQL> show parameter resource_manager_plan

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_manager_plan                string

There is no currently active plan.  In the event that there is an active resource plan already in place, we can ignore the step to create a resource plan below and continue with the remaining steps.

First, clear the existing pending area.  The pending area is where our resource plan work will be saved until it is validated and submitted.  If you get stuck or make a mistake, simply run this command to clear the pending area and start again:

begin
 dbms_resource_manager.clear_pending_area();
end;
/

Create a new pending area for plan, consumer group, and directives creation:

begin
 dbms_resource_manager.create_pending_area();
end;
/

Create the penalty box resource plan:

begin
 dbms_resource_manager.create_plan(
  plan => 'PENALTY_BOX_PLAN',
  comment => 'Resource plan to limit execution time');
end;
/

Create the penalty box resource consumer group:

begin
 dbms_resource_manager.create_consumer_group(
  consumer_group => 'PENALTY_BOX_GROUP',
  comment => 'Resource consumer group to limit execution time');
end;
/ 

Create the penalty box resource plan directives.  The OTHER_GROUPS directive essentially does nothing, but is necessary.  If you try to submit the resource plan without the OTHER_GROUPS directive, you will get this error message when attempting to validate the pending area:

begin
*
ERROR at line 1:
ORA-29377: consumer group OTHER_GROUPS is not part of top-plan PENALTY_BOX_PLAN
ORA-06512: at "SYS.DBMS_RMIN_SYS", line 3640
ORA-06512: at "SYS.DBMS_RMIN_SYS", line 3691
ORA-06512: at "SYS.DBMS_RMIN_SYS", line 3703
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 746
ORA-06512: at line 2

As mentioned previously, we are setting an execution limit of 60 seconds.

begin
  dbms_resource_manager.create_plan_directive(
    PLAN=>'PENALTY_BOX_PLAN',
    GROUP_OR_SUBPLAN=>'PENALTY_BOX_GROUP',
    COMMENT=>'Kill statement after exceeding total execution time',
    SWITCH_GROUP=>'CANCEL_SQL',
    SWITCH_FOR_CALL=>TRUE,
    SWITCH_TIME_IN_CALL=>60,
    SWITCH_ESTIMATE=>false);
  dbms_resource_manager.create_plan_directive(
    PLAN=> 'PENALTY_BOX_PLAN',
    GROUP_OR_SUBPLAN=>'OTHER_GROUPS',
    COMMENT=>'leave others alone',
    CPU_P1=>100);
end;
/ 

 

Validate the pending area for the new plan, consumer group, and directives.  If there is a problem with the plan, it will appear during validation

begin
 dbms_resource_manager.validate_pending_area();
end;
/ 

Submit the pending area for plan, consumer group, and directives

begin
 dbms_resource_manager.submit_pending_area();
end;
/ 

Now we have to grant privileges in order to assign consumer groups to users.

Create a new pending area for the users we want to put in our new penalty box:

begin
 dbms_resource_manager.create_pending_area();
end;
/ 

Grant the switch privilege for resource consumer groups to users or roles.  We need to execute this command for every user we want to be eligible to switch resource groups.

begin
 dbms_resource_manager_privs.grant_switch_consumer_group(
   grantee_name => 'SCOTT',
   consumer_group => 'PENALTY_BOX_GROUP',
   grant_option => FALSE);
end;
/ 

Assign the desired users to the new resource consumer group:

begin
 dbms_resource_manager.set_initial_consumer_group(
  user => 'SCOTT',
  consumer_group => 'PENALTY_BOX_GROUP');
end;
/ 

Validate the pending area before submitting it:

begin
 dbms_resource_manager.validate_pending_area();
end;
/ 

Submit the pending area:

begin
 dbms_resource_manager.submit_pending_area();
end;
/ 

Finally, specify the new plan to be used by the instance:

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = PENALTY_BOX_PLAN; 
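To verify which consumer group a user will start in, a quick check against dba_users (assuming you can select from the view):

SQL> select username, initial_rsrc_consumer_group
  2  from dba_users
  3  where username = 'SCOTT';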

To clear the resource plan from the instance, use this command.  Note the command uses two single quotation marks, not a double-quote:

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';

 

Once our user’s behavior has been adjusted, we can remove them from the plan:

begin
 dbms_resource_manager.set_initial_consumer_group(
  user => 'SCOTT',
  consumer_group => 'DEFAULT_CONSUMER_GROUP');
end;
/ 

To place other users in the penalty box, follow these abbreviated steps:

begin
 dbms_resource_manager_privs.grant_switch_consumer_group(
   grantee_name => 'USER',
   consumer_group => 'PENALTY_BOX_GROUP',
   grant_option => FALSE);
end;
/ 


begin
 dbms_resource_manager.set_initial_consumer_group(
  user => 'USER',
  consumer_group => 'PENALTY_BOX_GROUP');
end;
/ 


Now to test out our penalty box.  Executed as user SCOTT, this is my “hammer” query, which will run until it exhausts TEMP space:

set timing on
select distinct owner
from dba_constraints
where owner = (select owner
 from dba_constraints
 where constraint_type = 'C'
 and owner in(select owner
 from dba_constraints
 where constraint_type = 'P'
 and owner in(select owner
 from dba_constraints
 where constraint_type = 'R'
 and owner in(select owner
 from dba_constraints
 where constraint_type = 'U'))))
/

 

If the penalty box is working properly, you should see this error if the query exceeds the time limit set in the resource plan directive:

ERROR at line 14:
ORA-00040: active time limit exceeded - call aborted


Elapsed: 00:01:31.20

Notice that the query did not end EXACTLY when the sixty-second time limit was reached.  The time limit is an approximation.  The query will end at the next convenient interrupt as determined by the database after the limit passes, much like when you kill a database session and, instead of ending immediately, it gets ‘marked for kill’ while the database waits for the next convenient interrupt.  As a side note, you can add the word ‘immediate’ to the end of your ‘alter system kill session’ command to speed things along (see the sketch below).
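For reference, a minimal sketch of that variant (the SID and serial# values here are placeholders):

SQL> alter system kill session '123,45678' immediate;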

If you want to make changes to the resource plan, consumer group, or resource plan directives, you can use these commands to remove the existing ones.  Please note that you will need to create a pending area, execute the desired command, then submit the pending area in order for the command to take effect.

To remove a consumer group:

begin
DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP('PENALTY_BOX_GROUP'); 
end;
/

 

To remove a resource plan:

 

begin
DBMS_RESOURCE_MANAGER.DELETE_PLAN('PENALTY_BOX_PLAN');
end;
/

 

To remove a resource plan and all associated sub-plans and consumer groups:

 

begin
DBMS_RESOURCE_MANAGER.DELETE_PLAN_CASCADE('PENALTY_BOX_PLAN'); 
end;
/

 

To delete plan directives:

 

begin
DBMS_RESOURCE_MANAGER.DELETE_PLAN_DIRECTIVE (
 plan=>'PENALTY_BOX_PLAN', 
 group_or_subplan=>'PENALTY_BOX_GROUP');
end;
/

 

Enjoy and let the misbehaved beware!

Update – 4/3/17

Oracle Support published note 2250077.1 today on how to cancel SQL using the resource manager.  Nobody over there likely reads my blog, but it is quite a coincidence.  Just saying…

 

ORA-16957: SQL Analyze time limit interrupt

time_expired

First thing this morning, I was greeted with an email from my alert log monitor containing the error in the title.  This is one of the less cryptic error messages from an Oracle database, but what does it mean?

If you are an Oracle DBA and have been dealing with performance issues for any length of time, you are familiar with the SQL Tuning Advisor.  This tool is part of the Oracle Tuning Pack.  It takes one or more SQL statements as input, analyzes them, and displays recommendations to tune the statements with rationale and expected benefits.  These recommendations can include collecting object statistics, creating indexes, creating SQL profiles or SQL plan baselines, and even rewriting the statements.

The SQL Tuning Advisor has two modes – automatic and manual.  The automatic mode is referred to as the Automatic SQL Tuning Task, or SQL Analyze for short.  This task, when enabled, runs nightly as part of the default maintenance plan.  Usually it runs happily in the background and goes mostly unnoticed.  Other times it generates an ORA-16957 error.  This error means that it has not completed within its limit settings.  You can view these settings like this:

COLUMN parameter_value FORMAT A30  

SELECT parameter_name, parameter_value  
FROM dba_advisor_parameters  
WHERE task_name = 'SYS_AUTO_SQL_TUNING_TASK'  
AND parameter_name IN ('TIME_LIMIT','DEFAULT_EXECUTION_TYPE',
'SQL_LIMIT','LOCAL_TIME_LIMIT');

PARAMETER_NAME                 PARAMETER_VALUE
------------------------------ ------------------------------
LOCAL_TIME_LIMIT               1200
TIME_LIMIT                     7200
DEFAULT_EXECUTION_TYPE         TUNE SQL
SQL_LIMIT                      -1

The time_limit value is the total time in seconds that the SQL Analyze job is allowed per execution within the maintenance window.  The local_time_limit is the time in seconds that the SQL Analyze job is allowed to spend analyzing each individual SQL statement.  The sql_limit parameter is the number of SQL statements to examine per execution of the job; a value of -1 indicates no limit to the number of SQL statements the job can review within its time limit.

How does it determine which statements need to be analyzed?  From the Oracle documentation, the following quote is taken:

The automated SQL tuning task does not process the following types of SQL:

  • Ad hoc SQL statements or SQL statements that do not repeat within a week
  • Parallel queries
  • Queries that take too long to run after being SQL profiled, so that it is not practical for SQL Tuning Advisor to test execution
  • Recursive SQL

 You can run SQL Tuning Advisor on demand to tune the preceding types of SQL statements.

 

Can we find out what SQL statement caused the ORA-16957 error?

Get the execution_name for the day of the long-running execution:

set lines 200 pages 100 
col error_message for a60 
col execution_name for a15 

select execution_name, advisor_name,to_char(execution_start,'dd-mon-yy hh:mi:ss'), to_char(execution_end,'dd-mon-yy hh:mi:ss'), status,error_message 
from dba_advisor_executions 
where task_name = 'SYS_AUTO_SQL_TUNING_TASK' 
order by execution_start desc;

Take the name of the execution you are interested in and place it into the query below to identify the ID of the SQL causing the overrun.  If a sufficient amount of time has passed, you may not get any data from this query.

SELECT sql_id, sql_text FROM dba_hist_sqltext 
WHERE sql_id IN (SELECT attr1 FROM dba_advisor_objects 
WHERE execution_name = '&execution_name' 
AND task_name = 'SYS_AUTO_SQL_TUNING_TASK' 
AND type = 'SQL' AND bitand(attr7,64) <> 0 );


 Then you can take the SQL_ID from the query above and run it through the manual mode of the SQL Tuning Advisor:

Create the tuning task:

DECLARE 
l_sql_tune_task_id VARCHAR2(100); 
BEGIN 
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task ( 
sql_id => '&SQL_ID', 
scope => DBMS_SQLTUNE.scope_comprehensive, 
time_limit => 99999,   --I use a large time out value 
task_name => 'my_tuning_task', 
description => 'Tuning task for statement blahblah.'); 
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id); 
END; 
/

 

Execute the tuning task:

BEGIN 
DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_tuning_task'); 
end; 
/

When the task is complete, get the report:

SET LONG 5000 
SET LONGCHUNKSIZE 5000 
SET LINESIZE 200 
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('my_tuning_task') from DUAL;

Be sure to drop the tuning task before executing another.  NOTE – If the tuning task results recommend using a SQL Profile, then you MUST accept the profile before dropping the tuning task.  Otherwise, the recommended profile will be lost and you will need to run the tuning task again.

BEGIN
  dbms_sqltune.drop_tuning_task (task_name => 'my_tuning_task');
END;
/

 

In the event that the ORA-16957 error is seen on a regular basis, you may need to adjust the SQL Analyze task limits or limit the number of SQL statements analyzed. Use the DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER procedure to change these values.  Here is an example of changing the time limit to 7200 seconds (2 hours):

BEGIN 
DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(task_name => 'SYS_AUTO_SQL_TUNING_TASK', 
parameter => 'TIME_LIMIT', value => 7200); 
END; 
/

 

If you need to determine if the SQL Analyze job is running against your database, use this query:

SELECT CLIENT_NAME, STATUS
 FROM   DBA_AUTOTASK_CLIENT
 WHERE  CLIENT_NAME = 'sql tuning advisor';
  
 CLIENT_NAME          STATUS
 -------------------- --------
 sql tuning advisor   ENABLED

 

You can disable or enable the SQL Analyze job with these commands:

BEGIN
   DBMS_AUTO_TASK_ADMIN.ENABLE (
     client_name => 'sql tuning advisor'
 ,   operation   => NULL
 ,   window_name => NULL
 );
 END;
 /


 BEGIN
   DBMS_AUTO_TASK_ADMIN.DISABLE (
     client_name => 'sql tuning advisor'
 ,   operation   => NULL
 ,   window_name => NULL
 );
 END;
 /

If you are interested in viewing the latest SQL Analyze report:

VARIABLE l_report CLOB;

BEGIN
:l_report := DBMS_SQLTUNE.report_auto_tuning_task(
begin_exec => NULL,
end_exec => NULL,
type => DBMS_SQLTUNE.type_text,
level => DBMS_SQLTUNE.level_typical,
section => DBMS_SQLTUNE.section_all,
object_id => NULL,
result_limit => NULL
);
END;
/

SET LONG 1000000
PRINT :l_report

If this task is enabled in your database, then take advantage of its analysis.  Regular viewing of its output may help identify ‘usual suspects’ that continue to run poorly.

 

Blast from the past – Rollback segments

Those of us that have worked with Oracle databases over the years have seen new features come along that relieved some of the more challenging aspects of database administration (i.e. things that drove us crazy).  One of the new features we were happy to see was automatic undo management in Oracle 9i release 1.  This new feature made the challenge of managing rollback segments obsolete. In the days before automatic undo management, the DBA not only had to create the rollback tablespace, but also size and manage the rollback segments it contained.

For newer DBAs, these terms may not be familiar, so a little background may be in order.  Before the undo tablespace, there was the rollback tablespace.  This was where Oracle maintained changed but uncommitted data for read consistency, much like the undo tablespace.  The rollback tablespace was comprised of rollback segments which needed to be manually created by the DBA.  The number and size of these segments was at the discretion of the DBA.  If the number or size of the rollback segments was insufficient for the amount of database activity, then contention for the rollback segment data blocks was a real, and common, occurrence.
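In those days, a common way to spot that contention was to check the undo-related classes in v$waitstat (a sketch; steadily climbing counts suggested adding or enlarging rollback segments):

SQL> select class, count from v$waitstat
  2  where class like '%undo%';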

Here is a typical database creation script from the Oracle 8i release 3 (8.1.7) documentation:

CREATE DATABASE rbdb1
    CONTROLFILE REUSE
    LOGFILE '/u01/oracle/rbdb1/redo01.log' SIZE 1M REUSE,
            '/u01/oracle/rbdb1/redo02.log' SIZE 1M REUSE,
            '/u01/oracle/rbdb1/redo03.log' SIZE 1M REUSE,
            '/u01/oracle/rbdb1/redo04.log' SIZE 1M REUSE
    DATAFILE '/u01/oracle/rbdb1/system01.dbf' SIZE 10M REUSE 
      AUTOEXTEND ON
      NEXT 10M MAXSIZE 200M 
    CHARACTER SET WE8ISO8859P1;

CREATE ROLLBACK SEGMENT rb_temp STORAGE (INITIAL 100 k NEXT 250 k);

-- Alter temporary system rollback segment online before proceeding
ALTER ROLLBACK SEGMENT rb_temp ONLINE;

-- Create additional tablespaces ...
-- RBS: For rollback segments
-- USERs: Create user sets this as the default tablespace
-- TEMP: Create user sets this as the temporary tablespace
CREATE TABLESPACE rbs
    DATAFILE '/u01/oracle/rbdb1/rbs01.dbf' SIZE 5M REUSE AUTOEXTEND ON
      NEXT 5M MAXSIZE 150M;
CREATE TABLESPACE users
    DATAFILE '/u01/oracle/rbdb1/users01.dbf' SIZE 3M REUSE AUTOEXTEND ON
      NEXT 5M MAXSIZE 150M;
CREATE TABLESPACE temp
    DATAFILE '/u01/oracle/rbdb1/temp01.dbf' SIZE 2M REUSE AUTOEXTEND ON
      NEXT 5M MAXSIZE 150M;

-- Create rollback segments.  
CREATE ROLLBACK SEGMENT rb1 STORAGE(INITIAL 50K NEXT 250K)
  tablespace rbs;
CREATE ROLLBACK SEGMENT rb2 STORAGE(INITIAL 50K NEXT 250K)
  tablespace rbs;
CREATE ROLLBACK SEGMENT rb3 STORAGE(INITIAL 50K NEXT 250K)
  tablespace rbs;
CREATE ROLLBACK SEGMENT rb4 STORAGE(INITIAL 50K NEXT 250K)
  tablespace rbs;

-- Bring new rollback segments online and drop the temporary system one
ALTER ROLLBACK SEGMENT rb1 ONLINE;
ALTER ROLLBACK SEGMENT rb2 ONLINE;
ALTER ROLLBACK SEGMENT rb3 ONLINE;
ALTER ROLLBACK SEGMENT rb4 ONLINE;

ALTER ROLLBACK SEGMENT rb_temp OFFLINE;
DROP ROLLBACK SEGMENT rb_temp ;

As you can see, you had to create a temporary rollback segment in the SYSTEM tablespace and bring it online before you could create the other tablespaces, including the rollback tablespace with its rollback segments.  You then had to bring the rollback segments you created online.  The temporary rollback segment was taken offline and dropped at the end for housekeeping.

Rollback segments still exist in the newer versions of the Oracle database, but they are automatically created and managed by internal processes.  Additional segments are automatically created as needed.

Here is the output from a 9i release 1 database.  The SYS-owned rollback segment takes the place of the rb_temp rollback segment from the 8i database creation script.  The difference here is that it is created automatically as part of the database creation process.  The remaining rollback segments were created automatically when the undo tablespace was created.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
PL/SQL Release 9.0.1.1.1 - Production
CORE    9.0.1.1.1       Production
TNS for 32-bit Windows: Version 9.0.1.1.0 - Production
NLSRTL Version 9.0.1.1.1 - Production


SQL> select segment_name, owner from dba_rollback_segs;

SEGMENT_NAME                   OWNER
------------------------------ ------
SYSTEM                         SYS
_SYSSMU1$                      PUBLIC
_SYSSMU2$                      PUBLIC
_SYSSMU3$                      PUBLIC
_SYSSMU4$                      PUBLIC
_SYSSMU5$                      PUBLIC
_SYSSMU6$                      PUBLIC
_SYSSMU7$                      PUBLIC
_SYSSMU8$                      PUBLIC
_SYSSMU9$                      PUBLIC
_SYSSMU10$                     PUBLIC


Except for the segment naming convention, things look very similar in a version 11.2.0.4.0 database

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL> select segment_name, owner from dba_rollback_segs;

SEGMENT_NAME                   OWNER
------------------------------ ------
SYSTEM                         SYS
_SYSSMU10_1197734989$          PUBLIC
_SYSSMU9_1650507775$           PUBLIC
_SYSSMU8_517538920$            PUBLIC
_SYSSMU7_2070203016$           PUBLIC
_SYSSMU6_1263032392$           PUBLIC
_SYSSMU5_898567397$            PUBLIC
_SYSSMU4_1254879796$           PUBLIC
_SYSSMU3_1723003836$           PUBLIC
_SYSSMU2_2996391332$           PUBLIC
_SYSSMU1_3724004606$           PUBLIC

When automatic undo management was first introduced, you could choose between MANUAL and AUTO.  The manual setting was the default, and I suspect this had to do with not wanting to force this new feature on anyone.  It wasn’t until 11g release 1 that AUTO became the default for this parameter.  These days, I don’t know of anyone who still uses manual undo management, although it is still available as of 12c release 2.   Although there are occasional ORA-01555 (snapshot too old) errors encountered with the use of auto undo management, this is one feature that I think was worth the wait.
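To see which mode a given database is running in, a quick check of the undo parameters:

SQL> show parameter undo_management
SQL> show parameter undo_tablespace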

 

 

SQL Server Health Check v2

The new version of my SQL Server health check script adds the following improvements:

–Added queries for database start time and up time
–Added query to display memory usage of in-memory OLTP tables

----------------------------------------------------------Start of script
--sql_server_health_check_v2.sql
--v2
--This is a non-intrusive script written to give an overall informational view of a 
--SQL Server 2005-2016(CTP3) server instance for the purpose of first-time review.
--
--Version 2 additions
--Added queries for start time and up time
--Added query to display memory usage of in-memory OLTP tables


use master
GO

--Node name

SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS [CurrentNodeName];

--Version

select @@version as [VERSION];


--Instance parameters

print N'Instance Parameters';


SELECT  *
FROM    sys.configurations
ORDER BY name ;


--Instance start time

print N'Start time';

SELECT DATEADD(ms, -sample_ms, GETDATE()) AS StartTime
FROM sys.dm_io_virtual_file_stats(1,1);
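
--sample_ms in sys.dm_io_virtual_file_stats is a millisecond tick counter, so subtracting
--it from the current time approximates when the instance started.  On SQL Server 2008 and
--later the start time is also exposed directly, for example:
--SELECT sqlserver_start_time FROM sys.dm_os_sys_info;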


--Instance up time

print N'Up time';


DECLARE @server_start_time DATETIME,
@seconds_diff INT,
@years_online INT,
@days_online INT,
@hours_online INT,
@minutes_online INT,
@seconds_online INT ;

SELECT @server_start_time = login_time
FROM master.sys.sysprocesses
WHERE spid = 1 ;

SELECT @seconds_diff = DATEDIFF(SECOND, @server_start_time, GETDATE()),
@years_online = @seconds_diff / 31536000,
@seconds_diff = @seconds_diff % 31536000,
@days_online = @seconds_diff / 86400,
@seconds_diff = @seconds_diff % 86400,
@hours_online = @seconds_diff / 3600,
@seconds_diff = @seconds_diff % 3600,
@minutes_online = @seconds_diff / 60,
@seconds_online = @seconds_diff % 60 ;

SELECT @server_start_time AS server_start_time,
@years_online AS years_online,
@days_online AS days_online,
@hours_online AS hours_online,
@minutes_online AS minutes_online,
@seconds_online AS seconds_online ;


--Database listing

print N'Database list with Status and Recovery Model';

SELECT substring(name,1,40) AS name,  substring(state_desc,1,20) AS STATE, 
substring(recovery_model_desc,1,20) AS RECOVERY_MODEL
   FROM sys.databases
order by name;



--Database file size and growth settings

print N'Size and Growth';


select substring(b.name,1,40) AS DB_Name, substring(a.name,1,40) AS Logical_name, 
substring(a.filename,1,100) AS File_Name,
cast((a.size * 8.00) / 1024 as numeric(12,2)) as DB_Size_in_MB,
case when a.growth > 100 then 'In MB' else 'In Percentage' end File_Growth,
cast(case when a.growth > 100 then (a.growth * 8.00) / 1024
else (((a.size * a.growth) / 100) * 8.00) / 1024
end as numeric(12,2)) File_Growth_Size_in_MB,
case when ( maxsize = -1 or maxsize=268435456 ) then 'AutoGrowth Not Restricted' else 'AutoGrowth Restricted' end AutoGrowth_Status
from sysaltfiles a
join sysdatabases b on a.dbid = b.dbid
where DATABASEPROPERTYEX(b.name, 'status') = 'ONLINE'
order by b.name;
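
--In sysaltfiles, growth holds either a page count or a percentage; the CASE above treats
--values over 100 as pages, which is only a heuristic.  On SQL Server 2005 and later,
--sys.master_files.is_percent_growth reports this definitively, for example:
--SELECT name, growth, is_percent_growth FROM sys.master_files;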


--Is this a Cluster Node?

SELECT 'Clustered', case when SERVERPROPERTY('IsClustered') = 0 then 'No'
else 'Yes' end;

--Nodes in Cluster

print N'Cluster Nodes';

SELECT * FROM fn_virtualservernodes();
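
--fn_virtualservernodes() returns one row per failover cluster node, so an empty result
--here simply means the instance is not clustered.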


--Is AlwaysOn enabled (2012 and above)?

SELECT 'AlwaysOn', case when SERVERPROPERTY('IsHadrEnabled') = 0 then 'No'
                        when SERVERPROPERTY('IsHadrEnabled') = 1 then 'Yes'
                        else SERVERPROPERTY('IsHadrEnabled') end;
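
--SERVERPROPERTY('IsHadrEnabled') returns NULL on versions earlier than SQL Server 2012,
--which do not support AlwaysOn availability groups.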


--AlwaysOn status

declare @c int;
declare @rd nvarchar(60); 
declare @osd nvarchar(60);
declare @rhd nvarchar(60); 
declare @shd nvarchar(60); 
declare @csd nvarchar(60);
select @c = COUNT(name) 
from sys.all_objects
where name = 'dm_hadr_availability_replica_states';
if @c = 0
print N'No AlwaysOn Status';
else
begin
select @rd = role_desc, @osd = case when operational_state_desc is null then 'Replica is not local'
                  else operational_state_desc end,
       @rhd = recovery_health_desc, @shd = synchronization_health_desc,
       @csd = connected_state_desc
from sys.dm_hadr_availability_replica_states;
print @rd
print @osd
print @rhd
print @shd
print @csd
end

--Memory usage per database

print N'Memory Usage per User Database';


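--database_id values 1 through 4 are the system databases and 32767 is the internal
--Resource database; both are excluded so that only user databases are listed.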
SELECT 
                substring(DB_NAME(database_id),1,40) AS [Database Name]
                ,COUNT(*) * 8/1024.0 AS [Cached Size (MB)]
FROM 
                sys.dm_os_buffer_descriptors
WHERE 
                database_id > 4 
                AND database_id <> 32767
                AND db_name(database_id) <> 'SSISDB'
GROUP BY DB_NAME(database_id)
ORDER BY [Cached Size (MB)] DESC OPTION (RECOMPILE);



--Memory usage of in-memory OLTP tables

print N'In-memory OLTP table usage';

SELECT object_name(object_id) AS Name, *
FROM sys.dm_db_xtp_table_memory_stats;
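
--sys.dm_db_xtp_table_memory_stats requires SQL Server 2014 or later and is database-scoped,
--so run this query in the database that contains the memory-optimized tables to see
--meaningful numbers.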


--Last backup time per database

SELECT substring(sdb.Name,1,40) AS DatabaseName,
COALESCE(CONVERT(VARCHAR(12), MAX(bus.backup_finish_date), 101),'-') AS LastBackUpTime
FROM sys.sysdatabases sdb
LEFT OUTER JOIN msdb.dbo.backupset bus ON bus.database_name = sdb.name
where sdb.Name <> 'tempdb'
GROUP BY sdb.Name;


--No log backups for FULL or BULK_LOGGED recovery model databases in last 30 days


print N'Databases with FULL or BULK_LOGGED recovery model and no log backups in last 30 days';


SELECT name AS at_risk_database
   FROM sys.databases
where recovery_model_desc in('FULL','BULK_LOGGED')
and name not in(
SELECT 
msdb.dbo.backupset.database_name AS DBName 
FROM msdb.dbo.backupmediafamily 
INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id 
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 30)
and msdb.dbo.backupset.type = 'L'
group by msdb.dbo.backupset.database_name
);
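
--Databases listed here are doubly at risk: point-in-time recovery is impossible without
--log backups, and the transaction log is never truncated, so it will keep growing.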



--Databases with no backups at all in the last 30 days

print N'Databases with NO backups in last 30 days';

SELECT name AS at_risk_database
   FROM sys.databases
where name <> 'tempdb'
and name not in(
SELECT 
msdb.dbo.backupset.database_name AS DBName 
FROM msdb.dbo.backupmediafamily 
INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id 
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 30)
group by msdb.dbo.backupset.database_name
);


--Backups for the previous week per database

print N'All backups for previous week';

SELECT 
CONVERT(CHAR(40), SERVERPROPERTY('Servername')) AS Server, 
substring(msdb.dbo.backupset.database_name,1,40) AS DBName, 
msdb.dbo.backupset.backup_start_date, 
msdb.dbo.backupset.backup_finish_date, 
msdb.dbo.backupset.expiration_date, 
CASE msdb.dbo.backupset.type 
WHEN 'D' THEN 'Database' 
WHEN 'L' THEN 'Log'
WHEN 'F' THEN 'File'
WHEN 'P' THEN 'Partial'
WHEN 'I' THEN 'Differential database'
WHEN 'G' THEN 'Differential file'
WHEN 'Q' THEN 'Differential partial'
ELSE msdb.dbo.backupset.type 
END AS backup_type, 
msdb.dbo.backupset.backup_size, 
substring(msdb.dbo.backupmediafamily.logical_device_name,1,50) AS logical_device_name, 
substring(msdb.dbo.backupmediafamily.physical_device_name,1,50) AS physical_device_name, 
substring(msdb.dbo.backupset.name,1,50) AS backupset_name, 
substring(msdb.dbo.backupset.description,1,50) AS description 
FROM msdb.dbo.backupmediafamily 
INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id 
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 7) 
ORDER BY 
msdb.dbo.backupset.database_name, 
msdb.dbo.backupset.backup_finish_date;


--Jobs that failed in the last 24 hours

print N'Jobs Failing in last 24 hours';

-- Variable Declarations
DECLARE @PreviousDate datetime
DECLARE @Year VARCHAR(4)
DECLARE @Month VARCHAR(2)
DECLARE @MonthPre VARCHAR(2)
DECLARE @Day VARCHAR(2)
DECLARE @DayPre VARCHAR(2)
DECLARE @FinalDate INT
-- Initialize Variables
SET @PreviousDate = DATEADD(dd, -1, GETDATE()) -- Last 1 day
SET @Year = DATEPART(yyyy, @PreviousDate)
SELECT @MonthPre = CONVERT(VARCHAR(2), DATEPART(mm, @PreviousDate))
SELECT @Month = RIGHT(CONVERT(VARCHAR, (@MonthPre + 1000000000)),2)
SELECT @DayPre = CONVERT(VARCHAR(2), DATEPART(dd, @PreviousDate))
SELECT @Day = RIGHT(CONVERT(VARCHAR, (@DayPre + 1000000000)),2)
SET @FinalDate = CAST(@Year + @Month + @Day AS INT)
-- Final Logic
SELECT substring(j.[name],1,40) AS JOB,
substring(s.step_name,1,40) AS Step,
h.step_id,
substring(h.step_name,1,40) AS History_Step,
h.run_date,
h.run_time,
h.sql_severity,
substring(h.message,1,100) AS Message,
h.server
FROM msdb.dbo.sysjobhistory h
INNER JOIN msdb.dbo.sysjobs j
ON h.job_id = j.job_id
INNER JOIN msdb.dbo.sysjobsteps s
ON j.job_id = s.job_id AND h.step_id = s.step_id
WHERE h.run_status = 0 -- Failure
AND h.run_date > @FinalDate
ORDER BY h.instance_id DESC;



--Missing indexes

print N'Missing Indexes';


SELECT substring(so.name,1,40) AS Name
, (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) as Impact
, ddmid.equality_columns
, ddmid.inequality_columns
, ddmid.included_columns
FROM sys.dm_db_missing_index_group_stats AS ddmigs
INNER JOIN sys.dm_db_missing_index_groups AS ddmig
ON ddmigs.group_handle = ddmig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS ddmid
ON ddmig.index_handle = ddmid.index_handle
INNER JOIN sys.objects so WITH (nolock)
ON ddmid.object_id = so.object_id
WHERE ddmigs.group_handle IN (
SELECT TOP (5000) group_handle
FROM sys.dm_db_missing_index_group_stats WITH (nolock)
ORDER BY (avg_total_user_cost * avg_user_impact)*(user_seeks+user_scans)DESC);
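
--The missing index DMVs are cleared when the instance restarts, so these figures only
--reflect the workload since the last startup; treat them as suggestions to review, not
--indexes to create blindly.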



--Duplicate indexes

print N'Duplicate Indexes';

DECLARE @SCHEMANAME VARCHAR(30);
DECLARE @TABLENAME VARCHAR(127);
SET @SCHEMANAME = '%';  --wildcards so the LIKE filters below match all schemas and tables;
SET @TABLENAME = '%';   --these were previously never set, which made the query return no rows
WITH ind_list AS(
  select o.schema_id, i.object_id, i.index_id,
    i.name, i.type_desc,
    i.is_unique, i.is_primary_key, 
    STUFF( (SELECT ',' + tc.name
            FROM sys.index_columns ic
              JOIN sys.columns tc
               ON tc.column_id = ic.column_id AND
                  tc.object_id = ic.object_id
            WHERE ic.object_id = i.object_id AND
                  ic.index_id = i.index_id 
              AND ic.is_included_column = 0
            ORDER BY ic.index_column_id
            FOR XML PATH ('') ),1,1,'' ) index_columns,
    STUFF( (SELECT ',' + tc.name
            FROM sys.index_columns ic
              JOIN sys.columns tc
               ON tc.column_id = ic.column_id AND
                  tc.object_id = ic.object_id
            WHERE ic.object_id = i.object_id AND
                  ic.index_id = i.index_id
               AND ic.is_included_column = 1
            ORDER BY ic.index_column_id
            FOR XML PATH ('') ),1,1,'' ) include_columns
  FROM sys.indexes i
    JOIN sys.objects o ON o.object_id = i.object_id
  WHERE i.index_id > 0 AND i.type_desc <> 'XML'
    AND object_name(i.object_id) LIKE @TABLENAME
    AND i.is_disabled = 0 
    AND schema_name(o.schema_id) LIKE @SCHEMANAME )
SELECT substring(schema_name(included_indexes.schema_id),1,30) AS owner, 
  object_name(included_indexes.object_id) table_name,
  (SELECT SUM(st.row_count) FROM sys.dm_db_partition_stats st
   WHERE st.object_id = included_indexes.object_id
     AND st.index_id < 2 ) num_rows,
  included_indexes.name included_index_name, 
  included_indexes.index_columns included_index_columns, 
  included_indexes.include_columns
       included_index_include_columns,
  included_indexes.type_desc included_index_type, 
  included_indexes.is_unique included_index_uniqueness,
  included_indexes.is_primary_key included_index_PK,
  (SELECT SUM(a.total_pages) * 8 FROM sys.allocation_units a
    JOIN sys.partitions p ON a.container_id = p.partition_id
   WHERE p.object_id = included_indexes.object_id AND
     p.index_id = included_indexes.index_id
  ) included_index_size_kb,
  including_indexes.name including_index_name, 
  including_indexes.index_columns including_index_columns, 
  including_indexes.include_columns
       including_index_include_columns,
  including_indexes.type_desc including_index_type, 
  including_indexes.is_unique including_index_uniqueness,
  including_indexes.is_primary_key including_index_PK,
  (SELECT SUM(a.total_pages) * 8 FROM sys.allocation_units a
     JOIN sys.partitions p ON a.container_id = p.partition_id
   WHERE p.object_id = including_indexes.object_id AND
    p.index_id = including_indexes.index_id
  ) including_index_size_kb
FROM ind_list included_indexes
  JOIN ind_list including_indexes
    ON including_indexes.object_id = included_indexes.object_id
  JOIN sys.partitions ing_p 
   ON ing_p.object_id = including_indexes.object_id AND
      ing_p.index_id = including_indexes.index_id
  JOIN sys.allocation_units ing_a
   ON ing_a.container_id = ing_p.partition_id
WHERE including_indexes.index_id <> included_indexes.index_id
  AND LEN(included_indexes.index_columns) <= LEN(including_indexes.index_columns)
  AND included_indexes.index_columns + ',' =
      SUBSTRING(including_indexes.index_columns,1, LEN(included_indexes.index_columns + ','))
ORDER BY 2 DESC;


--High index fragmentation check

print N'Index with HIGH Fragmentation';

EXEC sp_MSforeachdb '
USE [?]
SELECT ''?'' AS DB_NAME,
       QUOTENAME(sysind.name) AS [index_name],
       indstat.*
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, ''LIMITED'') AS indstat
INNER JOIN sys.indexes sysind
  ON indstat.object_id = sysind.object_id
  AND indstat.index_id = sysind.index_id
where avg_fragmentation_in_percent >= 30
ORDER BY avg_fragmentation_in_percent DESC;
'

use master
GO


--Wait stats

print N'Wait Stats';

SELECT *
FROM sys.dm_os_wait_stats
where wait_time_ms > 10000
ORDER BY wait_time_ms DESC;
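
--Wait statistics accumulate from instance startup and include benign system waits
--(LAZYWRITER_SLEEP and SLEEP_TASK, for example), so interpret the raw totals accordingly.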


--Users and roles

print N'Users and Roles';

WITH Roles_CTE(Role_Name, Username)
AS
(
	SELECT 
		User_Name(sm.[groupuid]) as [Role_Name],
		user_name(sm.[memberuid]) as [Username]
	FROM [sys].[sysmembers] sm
)

SELECT  
    Roles_CTE.Role_Name,
    [DatabaseUserName] = princ.[name],
    [UserType] = CASE princ.[type]
                    WHEN 'S' THEN 'SQL User'
                    WHEN 'U' THEN 'Windows User'
                    WHEN 'G' THEN 'Windows Group'
                    WHEN 'A' THEN 'Application Role'
                    WHEN 'R' THEN 'Database Role'
                    WHEN 'C' THEN 'User mapped to a certificate'
                    WHEN 'K' THEN 'User mapped to an asymmetric key'
                 END
FROM 
    sys.database_principals princ 
JOIN Roles_CTE on Username = princ.name
where princ.type in ('S', 'U', 'G', 'A', 'R', 'C', 'K')
ORDER BY princ.name;


--Job information

print N'Job Information';


SELECT	 [JobName] = [jobs].[name]
		,[Category] = [categories].[name]
		,[Owner] = SUSER_SNAME([jobs].[owner_sid])
		,[Enabled] = CASE [jobs].[enabled] WHEN 1 THEN 'Yes' ELSE 'No' END
		,[Scheduled] = CASE [schedule].[enabled] WHEN 1 THEN 'Yes' ELSE 'No' END
		,[Description] = [jobs].[description]
		,[Occurs] = 
				CASE [schedule].[freq_type]
					WHEN   1 THEN 'Once'
					WHEN   4 THEN 'Daily'
					WHEN   8 THEN 'Weekly'
					WHEN  16 THEN 'Monthly'
					WHEN  32 THEN 'Monthly relative'
					WHEN  64 THEN 'When SQL Server Agent starts'
					WHEN 128 THEN 'Start whenever the CPU(s) become idle' 
					ELSE ''
				END
		,[Occurs_detail] = 
				CASE [schedule].[freq_type]
					WHEN   1 THEN 'O'
					WHEN   4 THEN 'Every ' + CONVERT(VARCHAR, [schedule].[freq_interval]) + ' day(s)'
					WHEN   8 THEN 'Every ' + CONVERT(VARCHAR, [schedule].[freq_recurrence_factor]) + ' weeks(s) on ' + 
						LEFT(
							CASE WHEN [schedule].[freq_interval] &  1 =  1 THEN 'Sunday, '    ELSE '' END + 
							CASE WHEN [schedule].[freq_interval] &  2 =  2 THEN 'Monday, '    ELSE '' END + 
							CASE WHEN [schedule].[freq_interval] &  4 =  4 THEN 'Tuesday, '   ELSE '' END + 
							CASE WHEN [schedule].[freq_interval] &  8 =  8 THEN 'Wednesday, ' ELSE '' END + 
							CASE WHEN [schedule].[freq_interval] & 16 = 16 THEN 'Thursday, '  ELSE '' END + 
							CASE WHEN [schedule].[freq_interval] & 32 = 32 THEN 'Friday, '    ELSE '' END + 
							CASE WHEN [schedule].[freq_interval] & 64 = 64 THEN 'Saturday, '  ELSE '' END , 
							LEN(
								CASE WHEN [schedule].[freq_interval] &  1 =  1 THEN 'Sunday, '    ELSE '' END + 
								CASE WHEN [schedule].[freq_interval] &  2 =  2 THEN 'Monday, '    ELSE '' END + 
								CASE WHEN [schedule].[freq_interval] &  4 =  4 THEN 'Tuesday, '   ELSE '' END + 
								CASE WHEN [schedule].[freq_interval] &  8 =  8 THEN 'Wednesday, ' ELSE '' END + 
								CASE WHEN [schedule].[freq_interval] & 16 = 16 THEN 'Thursday, '  ELSE '' END + 
								CASE WHEN [schedule].[freq_interval] & 32 = 32 THEN 'Friday, '    ELSE '' END + 
								CASE WHEN [schedule].[freq_interval] & 64 = 64 THEN 'Saturday, '  ELSE '' END 
							) - 1
						)
					WHEN  16 THEN 'Day ' + CONVERT(VARCHAR, [schedule].[freq_interval]) + ' of every ' + CONVERT(VARCHAR, [schedule].[freq_recurrence_factor]) + ' month(s)'
					WHEN  32 THEN 'The ' + 
							CASE [schedule].[freq_relative_interval]
								WHEN  1 THEN 'First'
								WHEN  2 THEN 'Second'
								WHEN  4 THEN 'Third'
								WHEN  8 THEN 'Fourth'
								WHEN 16 THEN 'Last' 
							END +
							CASE [schedule].[freq_interval]
								WHEN  1 THEN ' Sunday'
								WHEN  2 THEN ' Monday'
								WHEN  3 THEN ' Tuesday'
								WHEN  4 THEN ' Wednesday'
								WHEN  5 THEN ' Thursday'
								WHEN  6 THEN ' Friday'
								WHEN  7 THEN ' Saturday'
								WHEN  8 THEN ' Day'
								WHEN  9 THEN ' Weekday'
								WHEN 10 THEN ' Weekend Day' 
							END + ' of every ' + CONVERT(VARCHAR, [schedule].[freq_recurrence_factor]) + ' month(s)' 
					ELSE ''
				END
		,[Frequency] = 
				CASE [schedule].[freq_subday_type]
					WHEN 1 THEN 'Occurs once at ' + 
								STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(8), [schedule].[active_start_time]), 6), 5, 0, ':'), 3, 0, ':')
					WHEN 2 THEN 'Occurs every ' + 
								CONVERT(VARCHAR, [schedule].[freq_subday_interval]) + ' Seconds(s) between ' + 
								STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(8), [schedule].[active_start_time]), 6), 5, 0, ':'), 3, 0, ':') + ' and ' + 
								STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(8), [schedule].[active_end_time]), 6), 5, 0, ':'), 3, 0, ':')
					WHEN 4 THEN 'Occurs every ' + 
								CONVERT(VARCHAR, [schedule].[freq_subday_interval]) + ' Minute(s) between ' + 
								STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(8), [schedule].[active_start_time]), 6), 5, 0, ':'), 3, 0, ':') + ' and ' + 
								STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(8), [schedule].[active_end_time]), 6), 5, 0, ':'), 3, 0, ':')
					WHEN 8 THEN 'Occurs every ' + 
								CONVERT(VARCHAR, [schedule].[freq_subday_interval]) + ' Hour(s) between ' + 
								STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(8), [schedule].[active_start_time]), 6), 5, 0, ':'), 3, 0, ':') + ' and ' + 
								STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(8), [schedule].[active_end_time]), 6), 5, 0, ':'), 3, 0, ':')
					ELSE ''
				END
		,[AvgDurationInSec] = CONVERT(DECIMAL(10, 2), [jobhistory].[AvgDuration])
		,[Next_Run_Date] = 
				CASE [jobschedule].[next_run_date]
					WHEN 0 THEN CONVERT(DATETIME, '1900/1/1')
					ELSE CONVERT(DATETIME, CONVERT(CHAR(8), [jobschedule].[next_run_date], 112) + ' ' + 
						 STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(8), [jobschedule].[next_run_time]), 6), 5, 0, ':'), 3, 0, ':'))
				END
FROM	 [msdb].[dbo].[sysjobs] AS [jobs] WITH(NOLOCK) 
		 LEFT OUTER JOIN [msdb].[dbo].[sysjobschedules] AS [jobschedule] WITH(NOLOCK) 
				 ON [jobs].[job_id] = [jobschedule].[job_id] 
		 LEFT OUTER JOIN [msdb].[dbo].[sysschedules] AS [schedule] WITH(NOLOCK) 
				 ON [jobschedule].[schedule_id] = [schedule].[schedule_id] 
		 INNER JOIN [msdb].[dbo].[syscategories] [categories] WITH(NOLOCK) 
				 ON [jobs].[category_id] = [categories].[category_id] 
		 LEFT OUTER JOIN 
					(	SELECT	 [job_id], [AvgDuration] = (SUM((([run_duration] / 10000 * 3600) + 
																(([run_duration] % 10000) / 100 * 60) + 
																 ([run_duration] % 10000) % 100)) * 1.0) / COUNT([job_id])
						FROM	 [msdb].[dbo].[sysjobhistory] WITH(NOLOCK)
						WHERE	 [step_id] = 0 
						GROUP BY [job_id]
					 ) AS [jobhistory] 
				 ON [jobhistory].[job_id] = [jobs].[job_id];


--Existing linked server listing


print N'Linked Server Information';


declare @x int;
select @x = COUNT(name) 
from sys.all_objects
where name = 'Servers';
if @x <> 0
SELECT * 
--c.name, provider, data_source, is_remote_login_enabled, b.modify_date
FROM sys.Servers a
LEFT OUTER JOIN sys.linked_logins b ON b.server_id = a.server_id
LEFT OUTER JOIN sys.server_principals c ON c.principal_id = b.local_principal_id
where a.server_id <> 0;
else
exec sp_linkedservers;