The return of the Recyclebin

Last Friday, I received an email concerning a problem a DBA was having migrating objects from one tablespace to another.  They thought they had migrated all the objects, but they found some indexes when querying the DBA_SEGMENTS view.  These indexes were not visible in the DBA_INDEXES view, which was my first clue to the problem.  The next clue was when I found out all the indexes had a prefix of “BIN$”.  The DBA did not realize that these were objects that had been dropped previously and were sitting in the recyclebin.

The recyclebin is a little-mentioned feature of the Oracle database that I blogged about back in 2011, when it saved me from an embarrassing and tricky recovery.  In honor of this unsung feature, I found the old blog, dusted it off, and updated it where needed.  Here we go…

What is the recyclebin?

The Oracle recyclebin has been around since version 10g release 1 as part of a feature called Flashback Drop and in my opinion, continues to be a little-known and little-understood feature. It has saved me from potential data loss on at least one occasion, though, so I think it deserves some consideration.

The recyclebin is actually a data dictionary table containing information about dropped objects. Each named user account has its own recyclebin. Unless a user has the SYSDBA privilege, the only objects the user has access to in the recyclebin are those that the user owns. A user can view their own objects in the recyclebin using the following statement:

SELECT * FROM RECYCLEBIN;

When the recyclebin is enabled, dropped tables and any associated objects such as indexes, constraints, and nested tables are removed from the DBA_<objects> views that track database objects.  During this process, the table is renamed.  The new name has a telltale “BIN$” prefix placed on it.  These objects still occupy space in the tablespace where they originated. They continue to count against user space quotas, until purged from the recyclebin either manually or by the database because of tablespace space constraints.
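For example, a quick sketch (JONTAB is a hypothetical table here) showing that a dropped table lingers as a renamed segment that still consumes space:

drop table jontab;

-- The dropped table still appears as a BIN$ segment in the segment views
select segment_name, segment_type, bytes
from   user_segments
where  segment_name like 'BIN$%';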

When you drop an entire tablespace including its contents, the objects in the tablespace are NOT placed in the recyclebin, and the database purges any existing recyclebin entries for objects located in the dropped tablespace. Likewise:

  • When you drop a user, any objects belonging to the user are NOT placed in the recyclebin and any objects in the recyclebin owned by that user are purged.
  • When you drop a cluster, its member tables are NOT placed in the recyclebin and any former member tables in the recyclebin are purged.
  • When you drop a type, any dependent objects such as subtypes are NOT placed in the recyclebin and any former dependent objects in the recyclebin are purged.

Recyclebin Settings

In 10g release 1, the recyclebin was enabled by default and there was only an undocumented parameter to disable it.  This parameter was _recyclebin, which defaulted to a value of TRUE.  The recyclebin initialization parameter was introduced in 10g release 2 with a default setting of ON.  It was a dynamic parameter and could be enabled or disabled (ON or OFF) with either an ‘alter session’ or an ‘alter system’ command.  Oracle 11g release 1 made no changes to the basic feature, except to add the ‘deferred’ option to allow subsequent sessions, but not the current session, to be affected.  Release 2 remained unchanged from release 1, and these features remain unchanged into 12c release 2.  Also, 12c includes more table-associated objects that it moves to the recyclebin when a table is dropped.  The full list of these objects is:

  • Nested tables
  • LOB segments
  • Indexes
  • Constraints (excluding foreign key constraints)
  • Triggers
  • Clusters

To find out if the recyclebin is enabled (except in 10g release 1, where only the hidden _recyclebin parameter exists), you can execute ‘show parameter recyclebin’ from a SQL prompt as a user with sysdba privileges.

SQL> show parameter recyclebin

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
recyclebin                           string      on

I would recommend keeping the recyclebin enabled at all times, except during an upgrade, to add an extra layer of recovery options to the existing environment.
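If you do need to toggle it, for an upgrade window for example, here is a minimal sketch of the commands, assuming you have the necessary privileges:

alter session set recyclebin = off;              -- current session only
alter session set recyclebin = on;

alter system set recyclebin = off scope=spfile;  -- takes effect at the next startup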

Using Recyclebin

Once the recyclebin is enabled, you don’t have to give much thought to its use. Every time a ‘drop table’ command is executed, the table and its associated objects are placed in the recyclebin and renamed with a system-generated naming standard. Again, the objects actually remain in their native tablespaces.  Because objects age out of the recyclebin based on demands for space, it is always a good idea to back up a table with an export or a create table as select (CTAS) copy before dropping it. This is especially true for very large tables, which may not fit in the recyclebin.
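A quick CTAS sketch before a risky drop, with a hypothetical table name:

create table bigtab_backup as select * from bigtab;

drop table bigtab;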

I was unable to find any information on exactly how much space is allocated to the recyclebin, but suspect that it can consume all available space in the respective tablespace where the objects reside.

To find out how much space per owner and tablespace recyclebin data is using, you can use this query:

select owner, tablespace_name, round(sum(bytes)/1024/1024/1024,1) "RECYCLE_GB"
from dba_segments
where segment_name like 'BIN$%'
group by owner, tablespace_name
order by owner, tablespace_name;

In the event that you do not want a table to be preserved in the recyclebin, adding the ‘purge’ option at the end of the ‘drop table’ statement ensures that the recyclebin is bypassed and the table, as well as its associated objects, is dropped from the database.  Just ensure you have a backup of the table, just in case.
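For example, again with a hypothetical table name:

drop table bigtab purge;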

 

Recovering tables from the recyclebin

To recover a table from the recyclebin, you need to know if it is available. Use the following query to get this information (the owner was designated to limit the output):

select owner, object_name, original_name, droptime
from dba_recyclebin
where owner = 'CMN';




OWNER OBJECT_NAME                    ORIGINAL_NAME        DROPTIME
----- ------------------------------ -------------------- -------------------
CMN   BIN$o5lg4yWjH3ngRAAUT0rhuQ==$0 CMN_BASE_CONTACT     2011-05-18:23:12:42
CMN   BIN$o5lg4yWiH3ngRAAUT0rhuQ==$0 CMN_BASE_CONTACT_PK  2011-05-18:23:12:42
CMN   BIN$o5icV6rtByXgRAAUT0rhuQ==$0 CMN_BASE_CONTACT     2011-05-18:22:51:07
CMN   BIN$o5icV6rfByXgRAAUT0rhuQ==$0 CMN_BASE_CONTACT     2011-05-18:22:17:45
CMN   BIN$o5lg4yWvH3ngRAAUT0rhuQ==$0 CMN_BASE_CONTACT     2011-05-18:23:15:11
CMN   BIN$o5icV6rgByXgRAAUT0rhuQ==$0 EXECUTE_TABLE        2011-05-18:22:49:58
CMN   BIN$o5mU5fo6KF/gRAAUT0rhuQ==$0 EXECUTE_TABLE        2011-05-18:23:30:15
CMN   BIN$o5icV6ruByXgRAAUT0rhuQ==$0 EXECUTE_TABLE        2011-05-18:22:53:23
CMN   BIN$o5mU5foqKF/gRAAUT0rhuQ==$0 CMN_BASE_CONTACT     2011-05-18:23:27:15
CMN   BIN$o5mU5fo5KF/gRAAUT0rhuQ==$0 CMN_BASE_CONTACT     2011-05-18:23:30:08
CMN   BIN$o5mU5fo4KF/gRAAUT0rhuQ==$0 CMN_BASE_CONTACT_PK  2011-05-18:23:30:08
CMN   BIN$o5mU5fpGKF/gRAAUT0rhuQ==$0 CMN_BASE_CONTACT     2011-05-18:23:32:02
CMN   BIN$o5mU5fpHKF/gRAAUT0rhuQ==$0 EXECUTE_TABLE        2011-05-18:23:32:02

Here you can see which tables for the CMN user are in the recyclebin and when they were dropped.  You may have noticed that there are several versions of the CMN_BASE_CONTACT table in this recyclebin, differentiated by their droptime values.

To also help determine which table holds the data you want recovered, you can query recyclebin tables like regular tables.

select count(*)
from cmn."BIN$o5mU5fpGKF/gRAAUT0rhuQ==$0";




  COUNT(*)
----------
        89

It is important to use both the owner identifier (CMN in this case) and double-quotation marks because of the special characters in the recyclebin table name.

Once the table you want to recover has been identified, there are a few different methods by which you can recover the table or the individual data. The ‘flashback table <table_name> to before drop’ command will recover the latest copy of the table in the recyclebin. You can also add ‘rename to <alt_table_name>’ at the end of the command to recover the table under a different name, which is handy when you only want to recover specific data. In the case where there are several copies of the table in the recyclebin and you are not interested in the latest copy, you can do a simple create table as select (CTAS) operation using the recyclebin table name, not the original name. Remember to use double-quotation marks around the recyclebin table name due to the special characters.
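Using the CMN_BASE_CONTACT example above, the options look roughly like this (the recyclebin name is taken from the earlier listing; the new table names are just examples):

-- Recover the most recently dropped copy under its original name
flashback table cmn.cmn_base_contact to before drop;

-- Recover it under a different name so you can pull out specific rows
flashback table cmn.cmn_base_contact to before drop rename to cmn_base_contact_old;

-- Recover an older copy via CTAS against its recyclebin name
create table cmn.cmn_base_contact_0518 as
select * from cmn."BIN$o5mU5fpGKF/gRAAUT0rhuQ==$0";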

When you restore a table from the recyclebin, dependent objects such as indexes DO NOT get their original names back; they retain their system-generated recyclebin names. You must manually rename dependent objects to restore their original names. If you plan to manually restore original names for dependent objects, ensure that you make note of each dependent object’s system-generated recyclebin name before you restore the table.
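A sketch of that cleanup, using an index name from the earlier listing (verify the BIN$ name in your own recyclebin first):

alter index cmn."BIN$o5mU5fo4KF/gRAAUT0rhuQ==$0" rename to cmn_base_contact_pk;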

Purging the recyclebin

You can purge the recyclebin completely or only for objects belonging to individual users. As stated earlier, each user account in the database automatically has a recyclebin assigned to it. A user can simply execute the ‘purge recyclebin’ command to purge their own recyclebin. Users with the sysdba privilege can purge the entire recyclebin with the ‘purge dba_recyclebin’ command. Be careful, though, as this command purges ALL users’ recyclebins as well. Purging the entire recyclebin should be reserved for upgrade situations, or done only when directed by Oracle Support.

You can also perform selective purges of the recyclebin as a sysdba-privileged user. For example, ‘purge tablespace tmpdata’ only purges those objects in the recyclebin belonging to the tmpdata tablespace, regardless of which user owns them, while ‘purge tablespace tmpdata user jon’ purges only those objects in the recyclebin belonging to the tmpdata tablespace that are owned by the user jon. You can also purge individual tables from the recyclebin with the command ‘purge table <table_name>’. The table name in this command can be either the system-generated recyclebin name or the original name of the table.  As of Oracle version 12c release 2, you still need to purge individual users’ objects out of the recyclebin by tablespace as a sysdba-privileged user.
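The variations side by side (tmpdata, jon, and the table name follow the examples in the text):

purge recyclebin;                      -- your own recyclebin only
purge dba_recyclebin;                  -- every user's recyclebin (sysdba)
purge tablespace tmpdata;              -- all recyclebin objects in tmpdata
purge tablespace tmpdata user jon;     -- only jon's objects in tmpdata
purge table cmn_base_contact;          -- one table, by original or BIN$ name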

The recyclebin is one of those features that you forget about, but are very thankful for when you need it.


Don’t modify your spfile (unless you are sure)

I have seen it twice in the past week, so I need to get the word out: test parameter changes BEFORE modifying your spfile.  In at least one of the cases I saw this week, the DBA created a backup of their spfile before modifying it.  This was smart, but they did not realize they could use that backup copy for testing changes.

The environment was an Oracle 11g release 2 (11.2.0.4) three-node RAC system.  The DBA needed to change a non-modifiable (non-dynamic) parameter and restart the database to make the change permanent.  They created a copy of the spfile, which is always a GREAT idea before modifying it, and brought down the three instances, but were unable to bring them back up due to a problem with the parameter setting.  Now the spfile was bad, the database would not start, and they had to recreate the spfile from the backup and start the whole process over.

What they could have done is make the parameter change in the spfile backup and try to start one or more of the instances using it instead of the spfile. That way, if the parameter setting is bad, the spfile is unaffected and corrections can be made much more swiftly with a regular text editor.

In order to start the instance with the backup spfile (pfile), this command could have been used while connected to one of the instances through SQL*Plus as SYSDBA:

startup pfile=<path_and_name_of_pfile>

Once the instance starts with the parameter change, shut it down and update the spfile, either by changing the parameter with an ‘alter system’ command or by replacing the spfile with the contents of the modified pfile using the ‘create spfile from pfile’ command, and you are done.
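Putting the whole workflow together, a minimal sketch (the pfile path is just an example):

-- Take a text copy of the current spfile
create pfile='/tmp/test_init.ora' from spfile;

-- Edit /tmp/test_init.ora with the new parameter value, then test it
shutdown immediate
startup pfile='/tmp/test_init.ora'

-- Once the instance starts cleanly, make the change permanent
shutdown immediate
create spfile from pfile='/tmp/test_init.ora';
startup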

Enjoy!

 

Database ‘shutdown immediate’ is slow or hangs

I was contacted by a DBA on a project I am on and they told me that a database they had executed a ‘shutdown immediate’ command on had been hanging for 20 minutes without any progress.

This problem is typical in an environment where a remote application maintains a pool of connections to the database.  It is these remote connection processes that are preventing the database from closing immediately.

I have not seen this issue in an Oracle-on-Windows environment, so I will just concentrate on resolving this issue in a Unix-type environment.  First, you need to see if you do indeed have remote connections:

$ >ps -ef | grep -v grep | grep LOCAL=NO
 oracle 29257 18960   0 05:31:03 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 13255 18960   0   Sep 04 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 29187 18960   0 05:31:01 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 13277 18960   0   Sep 04 ?     0:01 oracleDUMMY (LOCAL=NO)
 oracle 29283 18960   0 05:31:07 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 13285 18960   0   Sep 04 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 29191 18960   0 05:31:02 ?     0:01 oracleDUMMY (LOCAL=NO)
 oracle 29290 18960   0 05:31:18 ?     0:05 oracleDUMMY (LOCAL=NO)
 oracle 29182 18960   0 05:31:00 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 13269 18960   0   Sep 04 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 13249 18960   0   Sep 04 ?     0:01 oracleDUMMY (LOCAL=NO)
 oracle 13251 18960   0   Sep 04 ?     0:01 oracleDUMMY (LOCAL=NO)
 oracle 29288 18960   0 05:31:17 ?     0:17 oracleDUMMY (LOCAL=NO)
 oracle 29292 18960   0 05:31:18 ?     0:01 oracleDUMMY (LOCAL=NO)
 oracle 29281 18960   0 05:31:07 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 13283 18960   0   Sep 04 ?     0:01 oracleDUMMY (LOCAL=NO)
 oracle 29285 18960   0 05:31:07 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 17496 18960   0 10:27:26 ?     0:01 oracleDUMMY (LOCAL=NO)
 oracle 13267 18960   0   Sep 04 ?     0:01 oracleDUMMY (LOCAL=NO)
 oracle 13288 18960   0   Sep 04 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 10164 18960   0 12:41:18 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle  3579 18960   0 09:00:08 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 12425 18960   0 12:56:18 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 13275 18960   0   Sep 04 ?     0:00 oracleDUMMY (LOCAL=NO)
 oracle 29185 18960   0 05:31:01 ?     0:00 oracleDUMMY (LOCAL=NO)

Now that we know there are remote connections, they need to be removed so the database can continue the shutdown process.  It would take some time to issue ‘kill -9’ commands individually for each of these processes, so I came up with a method to kill them automatically.  You just need to isolate the OS process ID of each remote connection:

$ >ps -ef | grep -v grep | grep LOCAL=NO | awk '{print $2}'
 29257
 13255
 29187
 13277
 29283
 13285
 29191
 29290
 29182
 13269
 13249
 13251
 29288
 29292
 29281
 13283
 29285
 17496
 13267
 13288
 10164
 3579
 12425
 13275
 29185

Once we isolate the OS process IDs, we can plug them into a command that will loop through and perform a ‘kill -9’ on each:

for i in $(ps -ef | grep -v grep | grep LOCAL=NO | awk '{print $2}')
 do
 kill -9 $i
 done

The loop executes as soon as you press <ENTER> after the word ‘done’.  I gave this process to the DBA who contacted me and it resolved their issue.  Try this the next time your database will not shut down immediately.

Generating test data sets with Swingbench

Reading about new features is great, but finding data to test them out on is sometimes troublesome.  As DBAs, we’ve created test databases countless times, but the majority of the time they are empty.

Oracle does provide sample schemas that can be installed automatically by using the database configuration assistant (DBCA) when creating a database or manually by executing scripts under the Oracle software home directory.  Consult the documentation of your particular version for more details.  While these sample schemas do provide a framework that mimics a common layout, they lack the volume of data that typical production systems have.  If your employer allows “sandboxes” to be created to test new versions and features then you can stop reading here because you are one of the fortunate ones.  However, if you do not have this opportunity but would still like to create a sizeable data set to work with, there is a utility called Swingbench that fills this need very well.

Swingbench was created by Dominic Giles (http://dominicgiles.com/index.html) as an Oracle database load generator, but has since become a very useful suite of tools including Data Generator, Trace Analyzer, and several monitoring tools.  Best of all, these tools are free to use.  I would recommend that you provide a small donation if you find them useful, though.

Quoting the creator, “Data Generator is a utility to populate, create, and load tables with semi random data. Users can specify sequences, random text, random numbers, database columns, and files as sources for tables.”  This sounds pretty good.

I recently downloaded Swingbench version 2.5, which includes the Data Generator utility, and worked with it on an Oracle VirtualBox virtual machine running Oracle Enterprise Linux (OEL) version 6 update 6 and an Oracle version 12.1.0.2 non-container database.  Here are the steps I followed to create my test data set:

  • Swingbench requires Java 8 JDK.  Download the jdk-8u111-linux-x64.rpm and install it (as the root user) with the command: “rpm -Uvh jdk-8u111-linux-x64.rpm”. Note that this file is for the 64-bit version of OEL.
  • Download the swingbench25971.zip file for version 2.5 from the website mentioned previously and place it in a location where the oracle user account has read, write, and execute privileges.
  • Unzip the swingbench25971.zip file.  It creates a “swingbench” directory in this location.
  • As the oracle user, set the following environment variables if they are not already set (actual values depend on your environment):
    • JAVA_HOME=/usr/bin/java
    • PATH=$JAVA_HOME/bin:$PATH
  • Go to the swingbench/bin directory.  You have the option to create the sales history schema (shwizard), the calling center schema (ccwizard), or the order entry schema (oewizard).
  • Once you execute the desired wizard file, the interface appears.

dg1

 

  • After clicking Next, you have the option to create or drop your desired schema.

dg2

  • For the connection string, use //<servername>/<database_sid>. Make sure you can ping the host by the <servername> used and that there is a listener present providing service for the <database_sid>.  Leave the connection type as the default (jdbc thin).  Leave DBA username as ‘sys as sysdba’ and make sure the password is correct.  Press Next to connect.

dg3

  • If the connection is successful, you will see the next screen where you can choose the username, password, schema tablespace, and tablespace datafile of your data set schema.  I like to keep my data separated for easier management, so I elect to create a separate tablespace and datafile.

dg4

  • The next screen allows you to choose database options such as partitioning (range or none), compression (yes or no), tablespace type (bigfile or normal), and indexing (all, primary key, or none).  Be careful here.  Do not select options that you are not licensed for in systems that are eligible for license audits.

dg5

  • Sizing options come next.  I don’t suppose I need to tell you not to size your schema any bigger than you have space available.  What is good here is that two counters at the bottom will let you know the size of the schema tablespace and the size of the temp tablespace required based on your selection.  Click next when you are done.

dg6

  • Finally, select the level of parallelism and click finish to create your schema.

dg7

  • The wizard log popup appears and the main wizard updates to tell you the progress of the creation process

dg10

  • Once the build is over, you should see a “successful” message popup window.  Click Ok to acknowledge the message and close the window.  You then have the option to review the wizard log in the wizard log window or save it to a file.  Close both the wizard log and main wizard windows after you are done.

dg8

At this point, you have the option to generate other data sets if you want to.  Enjoy.

 

Sequences are not gap free!

mind the gap

Yesterday I encountered a trace file on a “poorly-performing” process with some interesting waits.  The trace was incomplete because the trace process was started once the process was identified as “poorly-performing”.  Even so, it was interesting.

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        0      0.00       0.00          0          0          0           0

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute     78      0.03       0.05          0        106        106          53
Fetch       25      0.01       0.03          0         75          0          25
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      103      0.05       0.08          0        181        106          78


********************************************************************************
Trace file: PROD2_ora_2406.trc
Trace file compatibility: 11.1.0.7
Sort options: exeela  fchela  prsela  
       1  session in tracefile.
       0  user  SQL statements in trace file.
       2  internal SQL statements in trace file.
       2  SQL statements in trace file.
       2  unique SQL statements in trace file.
  384089  lines in trace file.
    1507  elapsed seconds in trace file.

 

Ok, we have 1507 seconds of tracing, or slightly over 25 minutes.  Of that, only .08 seconds are accounted for between the recursive (system) and non-recursive (user) statements.  Where did the time go?  There were no event waits under the non-recursive statements section, but the wait events for the recursive statements were very telling.  Note that this is not a complete listing; I filtered out any waits that did not add up to more than 1 second.

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  enq: SV -  contention                      289471        0.48       1210.91
  gc current grant 2-way                      22811        0.13         17.21
  db file sequential read                     57139        0.53         83.05
  gc current block 2-way                       2094        0.02          1.94

We now see that about 1211 seconds, or about 20 minutes of the trace, was spent on “enq: SV – contention”.  But what is that?  This event is sequence value (SV) contention and appears when there is a high demand for sequence values across multiple instances in a RAC environment.  I typically see “log file sync” waits that point to sequence contention issues in non-RAC environments.

When sequences are created, they have a default cache value of 20.  This means that each time the cached values are used up, the next 20 values are generated and cached in memory for quick access.  This value needs to be much higher when the sequence is used to generate values for inserts that load hundreds of thousands or millions of records in one process or routine.

I always recommend increasing the size of the sequence cache values when I see these type of situations, but I am often met with one of two very weak arguments.  One is that the client does not want to have to recreate the sequences to increase the cache value while the other is that the client wants to keep the cache small in order to avoid gaps in sequence values.  Let’s take these arguments one at a time.

First, altering the sequence cache value dynamically has been available since Oracle database version 10g release 1.  In fact, the Oracle 10g release 1 SQL reference guide states, “Oracle recommends using the CACHE setting to enhance performance if you are using sequences in a Real Application Clusters environment.”  The rest of this argument is disproved by creating a test sequence and changing the cache value with a simple ‘alter sequence’ command.
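A minimal sketch of that test, with a hypothetical sequence name:

create sequence test_seq;              -- default cache of 20
alter sequence test_seq cache 1000;    -- changed dynamically, no recreation needed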

The second argument is the most exasperating for me because the client usually won’t budge from their position.  Although the client is correct when they state that a smaller cache size reduces the risk of sequence value gaps, they are mistaken when they assume that a reduced cache will ensure no sequence gaps.

If you are relying on sequences to generate primary keys for newly-inserted records, there is NO guarantee that there will be no sequence value gaps.  Even Tom Kyte is in agreement here (https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:29059659958024).

The closest you can get to no gaps using a sequence in this fashion is creating the sequence with the “INCREMENT BY 1”, “NOCACHE”, and “ORDER” options together.  There will be a MAJOR performance cost to using a sequence this way to generate values during large insert activity.  Even with these options set, there is still the possibility of sequence value gaps.  Here is a version 11.2.0.4 demonstration of that fact:

SQL> create table jontab2 (jonkey number, jonstring varchar2(20));

Table created.

SQL> create sequence jonseq start with 1 increment by 1 nomaxvalue nocycle order nocache;

Sequence created.

SQL> insert into jontab2 values(jonseq.nextval, 'String');


1 row created.

SQL> insert into jontab2 values(jonseq.nextval, 'String');

1 row created.

SQL> commit;

Commit complete.

SQL> insert into jontab2 values(jonseq.nextval, 'String');

1 row created.

SQL> insert into jontab2 values(jonseq.nextval, 'String');

1 row created.

SQL> insert into jontab2 values(jonseq.nextval, 'String');

1 row created.

SQL> insert into jontab2 values(jonseq.nextval, 'String');

1 row created.

SQL> insert into jontab2 values(jonseq.nextval, 'String');

1 row created.

SQL> insert into jontab2 values(jonseq.nextval, 'String');

1 row created.

<I crash the instance here by killing the PMON process>

SQL> insert into jontab2 values(jonseq.nextval, 'String');
insert into jontab2 values(jonseq.nextval, 'String')
*
ERROR at line 1:
ORA-03135: connection lost contact
Process ID: 3657
Session ID: 125 Serial number: 7

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


<I restart the instance and login>

[oracle@rac1 ~]$ sqlplus jontest/semperf1

SQL*Plus: Release 11.2.0.4.0 Production on Fri Aug 4 09:10:01 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> insert into jontab2 values(jonseq.nextval, 'String');

1 row created.

SQL> commit;

Commit complete.

SQL> select *
  2  from jontab2
  3  order by jonkey;

    JONKEY JONSTRING
---------- --------------------
         1 String
         2 String
         9 String                        --------GAAAAAAP!!!!!!!

SQL> select jonseq.nextval
  2  from dual;

   NEXTVAL
----------
        10

 

Because I did not commit some newly-inserted records before the instance crashed, they were not written to the table even though the sequence incremented and created a gap in the key values of the table.  Even if you have a batch process that commits after every record inserted, you can’t account for instance crashes.

If you STILL intend to use a sequence alone to generate a “no gap” primary key, what happens if you have to delete records from the table?  Guess what, you now have gaps in your primary key values.  What are you going to do, re-index all the remaining records to remove the gaps?  I doubt it.  The best bet is to use the sequence and one or more other values in the table, perhaps even a timestamp, as the basis for a primary key value.  This way, you can increase the sequence cache and still have solid primary keys.  You just need to get over the “no gap sequence values” idea.
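If you want a sketch of that approach (hypothetical table and sequence names), a composite key built from the sequence value and a timestamp keeps the key unique while letting the cache stay large:

create sequence order_seq cache 1000;

create table orders (
  seq_val    number                         not null,
  created_at timestamp default systimestamp not null,
  order_data varchar2(200),
  constraint orders_pk primary key (seq_val, created_at)
);

insert into orders (seq_val, order_data)
values (order_seq.nextval, 'example row');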

 

Oracle Multitenant – Part 2 – Creating, connecting to, removing, and querying CDBs and PDBs

Now that we have an understanding of the multitenant architecture from part 1, we want to add some practical experience, starting with the creation of CDBs and PDBs.  My test environment will be an Oracle VirtualBox v5.1.22 virtual machine running Oracle Enterprise Linux 7.3 with Oracle version 12.2.0.1.0 software installed.

Creating CDBs and PDBs

NOTE – If you create a CDB under Oracle 12c release 1, the following is not supported:

  • Flashback Transaction Query
  • Flashback Transaction Back out
  • Database Change Notification
  • Continuous Query Notification
  • Client Side Cache
  • Heat map
  • Automatic Data Optimization
  • Oracle Streams
  • Oracle Fail Safe release 4.1.0 that ships with Oracle Database 12.1.0.2.

The preferred CDB creation method is through the Database Configuration Assistant (DBCA).  The DBCA wizard will guide you through the process and automatically execute the proper catalog scripts once the CDB is created.  The DBCA can also be used to generate all the scripts you need to create the CDB manually based on choices made through the DBCA create database wizard.  The DBCA can also generate a CDB creation template that you can use to run DBCA in other environments if you copy the template over.

Once you log in to the OS as the Oracle software owner and set your environment, execute $ORACLE_HOME/bin/dbca

create_cdb1

The obligatory Oracle splash screen appears

create_cdb2

Choose the ‘create database’ option and proceed to the next screen

create_cdb3

We are going to choose ‘typical configuration’ here to save time.  On this screen, we can select the storage type, DB file location, character set, and designate the administrative (SYS and SYSTEM) passwords.  More importantly, we have the option to create the new database as a CDB or non-CDB database.  Since CDB is our goal here, we check the CDB box and designate the name of our inclusive pluggable database, then proceed.

create_cdb4

The summary screen appears to confirm our choices.

create_cdb5

If you like, you can watch the progress bar and the progress of the individual steps as the database is being built

create_cdb6

Acknowledge the finish screen when the process completes to exit the DBCA

create_cdb7

What if we had selected the ‘advanced configuration’ option earlier?  All the options under Typical configuration are automatically disabled.

create_cdb_adv1

Now we need to select the type of database (single-instance, RAC, or RAC one-node) and the characteristics (custom, general purpose, or data warehouse).  We’ll select single-instance and general purpose, then proceed.

create_cdb_adv2

There are some interesting choices on the next screen.  After providing a global database name and SID, you can choose a CDB or non-CDB database.  If you choose a CDB, you can opt for an empty CDB with no PDBs (except the PDB$SEED) or you can choose to create one or more PDBs at the same time.  Another option that is newly available in 12c release 2 is the option to create a separate UNDO tablespace for each PDB.

create_cdb_adv3

Storage options come next

create_cdb_adv4

Recovery options on this screen.  I chose to create a flash recovery area with the recommended size and enable archiving.

create_cdb_adv5

Listener selection is next.  There are no existing database listeners in the system, so I opt to create one with a default name and port number.  It is recommended to avoid using the default 1521 port for security reasons, but since this is a closed system, there should be no concerns.

create_cdb_adv6

At the next screen, I chose not to install Database Vault or Label Security

create_cdb_adv7

The next screen is multi-tabbed with options for memory allocation, sizing of processes, selection of character sets,  connection mode, and installation of sample schemas.  I choose the defaults in each section.

create_cdb_adv8

I opted to configure Enterprise Manager database express.  We may want to view our CDB options for backup and recovery later in a separate post.

create_cdb_adv9

Specify the administrative user credentials at the next screen

create_cdb_adv10

This screen prompts you with options to create the database, create a template of your choices for additional CDBs of this type, or generate scripts for manual database creation.  I opt for all three.  Why not?

create_cdb_adv11

Confirm your choices on the summary screen

create_cdb_adv12

If you like, you can watch the progress bar and the progress of the individual steps as the database is being built

create_cdb_adv13

Acknowledge the finish screen when the process completes to exit the DBCA

create_cdb_adv14

Creating a PDB

If you have a CDB and want to create a PDB or add a PDB, the DBCA is also the preferred method.

To create a new PDB, start DBCA. Once you get past the 12c splash screen, select ‘Manage pluggable databases’ on the first screen, then proceed.

create_pdb1

This screen contains all the PDB options you have in your current configuration, such as create, delete, unplug, or configure.  The delete, unplug, and configure options will only be available if there is an existing PDB.  Select ‘create a pluggable database’, then proceed.

create_pdb2

Designate the CDB that will host your new PDB and give the admin account credentials for the designated CDB, then proceed.

create_pdb3

Here you can designate whether to create a new PDB from the ‘seed’ or create a new PDB from an existing PDB.  We are going to use the seed since we don’t have an existing PDB.

create_pdb4

Give a name to the new PDB and the credentials of the new administrative account for the PDB.  It is interesting to note here that the wizard is not looking for a typical SYS or SYSTEM account since these accounts already exist at the CDB level.

create_pdb5

Storage options are next, as well as the option to create a default user tablespace.  The default user tablespace will be called USERS if you choose this option.  It is interesting to note here that even though the option is not given, the new PDB will have its own UNDO tablespace by default.

create_pdb6

Confirm your choices on the summary screen, then proceed.

create_pdb7

Watch the progress of the PDB creation.

create_pdb8

Acknowledge the finish screen when the process completes to exit the DBCA

create_pdb9

Other methods to create a PDB include:

  • Using the ‘create pluggable database’ command to create the PDB while connected to the CDB (a minimal sketch follows this list).
  • Plugging an existing PDB into the CDB.
  • Using Oracle Cloud Control to guide you through the creation of the PDB.
  • Cloning an existing PDB to a new PDB, with or without data.
  • Creating an empty PDB and using Oracle Data Pump to import data from one database source into a PDB.
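Here is a minimal sketch of the first option, run while connected to the CDB root (the PDB name, admin credentials, and file name conversion are just placeholders for this environment):

create pluggable database JON12PDB2
  admin user pdbadmin identified by "Chang3Me#1"
  file_name_convert = ('/pdbseed/', '/JON12PDB2/');

alter pluggable database JON12PDB2 open;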

Starting and stopping the CDB

Starting and stopping a CDB is just like starting and stopping a non-CDB database.  Use the SQL*Plus utility (or Oracle Enterprise Manager) and issue the startup and shutdown commands as usual.  All of the options still exist such as startup force, shutdown abort and shutdown immediate.

Starting and stopping the PDB

First, determine the state of the PDB before issuing start or stop commands by connecting to the CDB with an account having SYSDBA, SYSOPER, SYSBACKUP or SYSDG privileges:

SQL> select name, open_mode from v$pdbs;

NAME                 OPEN_MODE
-------------------- ----------
PDB$SEED             READ ONLY
JON12PDB1            MOUNTED

The PDB$SEED database is ALWAYS read only.  As the template database, it needs to maintain its integrity.  Since the PDB is already mounted, you just need to open it:

SQL> alter pluggable database JON12PDB1 open;

Pluggable database altered.

You can execute the commands directly without the ‘pluggable database’ syntax by changing to the PDB within the session first:

SQL> alter session set container=JON12PDB1;

Session altered.

SQL> shutdown
 Pluggable Database closed.

I would not recommend this method, however, since it would be easy to forget which container you are pointing to, and mistakenly issuing a shutdown against the CDB could make you think about updating your resume.

Other methods:

  • Use the ‘alter pluggable database’ command to start and stop the PDB when connected to the CDB with an account having SYSDBA, SYSOPER, SYSBACKUP or SYSDG privileges.
  • Using the SQL*Plus ‘startup pluggable database’ and ‘shutdown pluggable database’ commands when connected to the CDB with an account having SYSDBA, SYSOPER, SYSBACKUP or SYSDG privileges.

PDB startup options (make sure you are connected to the PDB using the ‘alter session set container’ command from the CDB, as in the example above):

Open (default) – Open the PDB normally (startup pluggable database JON12PDB1;)

Force – Force the PDB open (startup pluggable database JON12PDB1 force;)

Upgrade – Open the PDB in upgrade mode (startup pluggable database JON12PDB1 upgrade;)

Restricted – Open the PDB in restricted mode (startup pluggable database JON12PDB1 restricted;)

Read only – Open the PDB in read only mode (startup pluggable database JON12PDB1 open read only;)

For those who don’t want to have to execute separate startup commands for the PDBs after starting the CDB, there is a means by which you can set the default state per PDB after the CDB is started.

For example, after starting the CDB, place your desired PDB in the state you want it to return to, whether OPEN (read/write) or OPEN READ ONLY, and execute the SAVE STATE command from the CDB:

alter pluggable database <PDB_NAME> save state;

Now when the CDB starts, the respective PDB will automatically go to the saved state.  You can view the saved state of each PDB in the DBA_PDB_SAVED_STATES view.  Please note that the saved state feature for PDBs is only available in Oracle software versions 12.1.0.2 and higher.
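A quick way to check what has been saved is a query against that view:

select con_name, state
from   dba_pdb_saved_states;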

To clear the saved state of the PDB, issue the DISCARD STATE command:

alter pluggable database <PDB_NAME> discard state;

This command will set the state of the particular PDB back to the MOUNTED default.  Changing the saved state of the desired PDB is a simple matter of placing the PDB in the desired state and issuing the SAVE STATE command again.

PDB open options:

Open (default) – Open the PDB in read write mode (alter pluggable database JON12PDB1 open;)

Open read only – Open the PDB in read only mode (alter pluggable database JON12PDB1 open read only;)

Open resetlogs – Open the PDB after a point-in-time-recovery (PITR) (alter pluggable database JON12PDB1 open resetlogs;)

Open restricted – Open the PDB in restricted mode (alter pluggable database JON12PDB1 open restricted;)

Force – Force shutdown and restart (alter pluggable database JON12PDB1 open force;)

Upgrade  – Open in upgrade mode (alter pluggable database JON12PDB1 open upgrade;)

PDB shutdown options (make sure you are connected to the PDB using the ‘alter session set container’ command from the CDB, as in the example above):

Normal (default) – Close the PDB normally, waiting for transactions to end (shutdown)

Shutdown Immediate – Close the PDB. Rollback any pending transaction. Flush the SGA (shutdown immediate)

Abort – Shutdown the PDB without delay. Dirty buffers may be written to disk (shutdown abort)

Transactional – Shutdown the PDB after all active transactions have completed (shutdown transactional)

PDB close options:

Close (default) – Close the PDB normally (alter pluggable database JON12PDB1 close;)

Immediate – Force the PDB closed (alter pluggable database JON12PDB1 close immediate;)

 

Connecting to the CDB/PDB

Accessing the CDB is the same as accessing a non-CDB database and we have already discussed accessing the PDB while connected to the CDB, but what about accessing the PDB directly from outside the CDB?  As I mentioned in my last post, listener services are automatically created for the CDB and each PDB on creation.  Here you can see the JON12CDB and JON12PDB1 services when checking the listener status.

listener_status

Make sure the CDB is started and the PDB is open.  You can query the v$pdbs view from the CDB to see the status of all the enclosed PDBs.  Once you are sure the PDB is open, you can add an entry to your tnsnames.ora file to access the PDB directly (The entry for the CDB should already be present):

JON12PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JON12PDB1)
    )
  )

You should then be able to access your PDB directly:

pdb_remote_connection
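From a client with that tnsnames.ora entry in place, a connection sketch from SQL*Plus (the account is just an example; use whatever user exists in the PDB):

connect system@JON12PDB1
show con_name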

Dropping a CDB and PDB

Dropping or removing a CDB is the same as dropping a non-CDB database.  The only difference in the case of dropping a CDB is that all PDB databases beneath it will be removed as well.

Dropping or removing a PDB is a bit more complicated when performing the task manually, but not by much.  When connected to the CDB that contains the PDB, use the ‘alter pluggable database’ command to first close the PDB, then issue the ‘drop pluggable database <PDB_name> including datafiles;’ to complete the process.  Using the DBCA to drop a PDB is even easier as it performs the closing of the PDB automatically, if necessary.
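The manual removal described above boils down to two statements while connected to the CDB root:

alter pluggable database JON12PDB1 close;

drop pluggable database JON12PDB1 including datafiles;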

Other PDB removal options:

  • The SQL prompt, using the drop pluggable database command.
  • Using Oracle Enterprise Manager Cloud Control to remove the PDB
  • Unplugging the PDB
  • Deleting the entire CDB.

 

Querying the CDB and PDB

Once you are connected to the CDB or PDB, queries function the same as they have in non-CDB databases.  What is more important in this stage of our multitenant experience is knowing about multitenant-specific commands and objects.  For instance, when you are connected to a CDB, you can use the command ‘show pdbs’ to get all associated PDBs listed with their current state:

SQL> show pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
------ ------------------------------ ---------- ----------
     2 PDB$SEED                       READ ONLY  NO
     3 JON12PDB1                      READ WRITE NO

Remember the PDB$SEED PDB is ALWAYS open in read only mode.

The container ID (CON_ID) is an important identifier in the multitenant environment.  The ‘show con_id’ command displays the CON_ID of the current CDB or PDB you are in.  Notice the value changes when I switch from the CDB to the PDB with the ‘alter session set container’ command:

SQL> show con_id

CON_ID
------------------------------
         1

SQL> alter session set container=jon12pdb1;

Session altered.

SQL> show con_id

CON_ID
------------------------------
         3

NOTE – Once you alter your session to switch from the CDB to a PDB, you cannot switch back to the CDB and must exit your session and open a new one or open a different session to access the CDB once more.

The ‘show con_name’ command is similar to the ‘show con_id’ except that the name and not the ID is displayed:

SQL> show con_name

CON_NAME
 ------------------------------
 CDB$ROOT

SQL> alter session set container=jon12pdb1;

Session altered.

SQL> show con_name

CON_NAME
 ------------------------------
 JON12PDB1

 

If you need a list of the PDB IDs and names, query the v$pdbs view:

SQL> select con_id, name
  2  from v$pdbs;

    CON_ID NAME
---------- ----------
         3 JON12PDB1

In some views, the CON_ID column has been added for multitenant identification as in the case of v$tablespace:

select a.con_id, a.name, b.name
 from v$tablespace a, v$pdbs b
 where a.con_id=b.con_id order by 1;

CON_ID NAME       NAME
 ---------- ---------- ----------
 2 TEMP       PDB$SEED
 2 UNDOTBS1   PDB$SEED
 2 SYSAUX     PDB$SEED
 2 SYSTEM     PDB$SEED
 3 TEMP       JON12PDB1
 3 UNDOTBS1   JON12PDB1
 3 SYSAUX     JON12PDB1
 3 USERS      JON12PDB1
 3 SYSTEM     JON12PDB1

NOTE – This is a listing from the CDB.  If you are connected to a PDB, you will only see the tablespaces for that particular PDB.

In other views, some filtering has been enabled, such as dba_tables.  In this case, you will see only tables associated with the particular CDB or PDB you are connected to:

SQL> select count(*)
 2  from dba_tables;

COUNT(*)
 ----------
 2106

SQL> alter session set container=jon12pdb1;

Session altered.

SQL> select count(*)
 2  from dba_tables;

COUNT(*)
 ----------
 2105

The difference in the counts here is that when you are connected to the PDB, you cannot see the CDB-specific objects.  The CDB_TABLES view allows you to identify tables as belonging to a CDB or PDB:

select a.con_id, b.name, count(*)
 from cdb_tables a, v$containers b
 where a.con_id=b.con_id
 group by a.con_id, b.name;

CON_ID NAME  COUNT(*)
 ---------- ---------- ----------
 1 CDB$ROOT     2106
 3 JON12PDB1     2105

This view lets you see the CDB and enclosed PDB tables when connected to the CDB.  However, it does not work the same when connected to the PDB:

SQL> alter session set container=jon12pdb1;

Session altered.

SQL> select a.con_id, b.name, count(*)
  2  from cdb_tables a, v$containers b
  3  where a.con_id=b.con_id
  4  group by a.con_id, b.name;

    CON_ID NAME         COUNT(*)
---------- ---------- ----------
         3 JON12PDB1        2105

 

Now that you are on the PDB, the view only lets you see tables associated with that particular PDB.

Here are some other admin views available:

V$CONTAINERS – Contains information on the containers in the database including the ROOT and all PDBs.

V$PDB_INCARNATION – Provides information on incarnations specific to given PDBs.

V$PDBS – Provides information on PDBs within the database

DBA_CONTAINER_DATA – Displays various container data for container data objects.

DBA_HIST_PDB_INSTANCE – Contains workload repository records for PDBs

DBA_PDB_HISTORY – Contains the history of a PDB. Only visible from within a specific PDB.

DBA_PDB_SAVED_STATES – Contains the saved state of a PDB.

DBA_PDBS – Contains a list of the PDBs in the database. Only visible from the ROOT container.

PDB_ALERTS – Provides information on specific PDB alerts.

PDB_PLUG_IN_VIOLATIONS – Provides information on incompatibilities between the CDB and the PDB.

 

We’ve covered quite a bit of territory in this post.  Next up, cloning and migrating PDBs

 

Oracle Multitenant – Part 1 – What is multitenant?

The Oracle database multitenant architecture was introduced in version 12c release 1. This architecture consists of a container database (CDB) that contains one or more pluggable databases (PDB). The CDB acts as a metadata repository for the PDBs and a source of user accounts that are utilized across the PDBs called common users. The CDB and PDB share resources, but not data. A PDB template or “seed” database allows the quick deployment of PDBs out of the CDB. This architecture was created with consolidation in mind and according to the Oracle 12c release 1 documentation, will become the default database architecture as the non-CDB architecture is deprecated (https://docs.oracle.com/database/121/UPGRD/deprecated.htm#BABDBCJI).

My current work has been in non-CDB environments, but given that non-CDB databases are being deprecated, it is in my interest to start working with CDB databases. I plan to share this work with you in a series of posts for (hopefully) mutual benefit.

The multitenant architecture is a separately licensed option from Oracle, but there is an exception. If the CDB contains a single PDB, then no additional licensing is required.   This way, you can start converting non-CDB databases to CDB without having to accrue additional licensing cost.

The PDBs can be ‘plugged’ and ‘unplugged’ between container databases within limits. Access to the PDBs within a CDB is handled via SQL*Net. Whenever a PDB is created, a service of the same name is also created to handle the PDB connections and the service is automatically registered with the database listener.   It is recommended to change the name of this service for security reasons.

While the CDB and PDBs share resources such as memory and background processes, each PDB has its own SYSTEM and SYSAUX tablespaces with associated objects.   There is one common UNDO tablespace at the CDB level (one per instance for RAC) to be shared, but each PDB has its own TEMP tablespace.

There is a laundry list of features in the Oracle Documentation that this new architecture provides involving consolidation:

  • Cost reduction: By consolidating hardware and sharing database memory and files, you reduce costs for hardware, storage, availability, and labor. For example, 100 PDBs on a single server share one database instance and one set of database files, thereby requiring less hardware and fewer personnel.
  • Easier and more rapid movement of data and code: By design, you can quickly plug a PDB into a CDB, unplug the PDB from the CDB, and then plug this PDB into a different CDB. The implementation technique for plugging and unplugging is similar to the transportable tablespace technique.
  • Easier management and monitoring of the physical database: The CDB administrator can attend to one physical database (one set of files and one set of database instances) rather than split attention among dozens or hundreds of non-CDBs. Backup strategies and disaster recovery are simplified.
  • Separation of data and code: Although consolidated into a single physical database, PDBs mimic the behavior of non-CDBs. For example, if user error loses critical data, a PDB administrator can use Oracle Flashback or point-in-time recovery to retrieve the lost data without affecting other PDBs.
  • Secure separation of administrative duties: A user account is common, which means that it can connect to any container on which it has privileges, or local, which means that it is restricted to a specific PDB. A CDB administrator can use a common user account to manage the CDB. A PDB administrator uses a local account to manage an individual PDB. Because a privilege is contained within the container in which it is granted, a local user on one PDB does not have privileges on other PDBs within the same CDB.
  • Ease of performance tuning: It is easier to collect performance metrics for a single database than for multiple databases. It is easier to size one SGA than 100 SGAs.
  • Support for Oracle Database Resource Manager: In a multitenant environment, one concern is contention for system resources among the PDBs running on the same computer. Another concern is limiting resource usage for more consistent, predictable performance. To address such resource contention, usage, and monitoring issues, you can use Oracle Database Resource Manager.
  • Fewer database patches and upgrades: It is easier to apply a patch to one database than to 100 databases, and to upgrade one database than to upgrade 100 databases.

The previous benefits were from the first release of CDB in Oracle version 12c release 1. Oracle 12c release 2 added the following features, although this is not a complete list:

  • CDBs can now support “thousands” of PDBs instead of the previous maximum of 252.
  • A PDB can now have a different character set than the CDB
  • Point-in-time copies of PDBs possible for development and testing purposes
  • PDB flashback now possible by setting up restore points
  • PDBs can have their own UNDO tablespaces
  • Upgrade a CDB with one or more PDBs in a single operation

 

Next up in this series – Creating, connecting to, removing, and querying CDBs and PDBs