How Eter Pani became Oracle Certified Master

I got my OCM. Hurray!
One of the most common questions people ask me now is how to pass the OCM exam.
Firstly, I am very grateful to Oracle for introducing this exam. It is finally a real practical exam, not a test with a set of answers. I can see how complicated it is for the examiners, but it is a real proof of DBA skills, not of general memory skills.
Secondly, this exam does not prove exceptional knowledge of Oracle. It proves that you are fluent in all the basic skills. During the exam everything works according to the documentation. Personally, I keep a collection of favourite Metalink notes and advanced instructions that I use at least once a week; you do not need such things on the exam.
The environment is ready for you. You do not need to reconfigure switches for RAC or install additional OS packages. What you are doing is pure Oracle database administration, nothing OS-specific. If something genuinely differs between UNIX flavours, forget about it; it should not be part of the exam.
When I come to a client site I frequently have no internet access, so I keep a local copy of the Oracle documentation. Moreover, reading a local copy is actually faster than browsing the online version. I was used to the local copy, and it was really useful during the exam.
Another habit I find useful is preparing all scripts in a text file and then copying them into the SQL*Plus or shell window. If you need to rerun a script, or slightly alter it for a different skill-set, you can reuse your own history. For example, I store my backup/restore scripts in such a file.
You have two days, 14 hours including lunch and breaks, for seven skill-sets. No skill-set takes more than two hours. If you do not believe you can do something in under two hours, forget about it: even if it appeared on the exam, you would not be able to finish it in time. Focus on the things you can do.
The exam is based on an 11.2 database. If something differs between patch sets, again, forget about it. Asking for information specific to a patch set would be unfair to people used to the base release, so such questions should not appear on the exam.
When you get the skill-set tasks at the beginning, read them through to the end. Mark the tasks that will require some investigation in the documentation, and mark the tasks you doubt you can solve. Estimate the time for each task. Start from the short and easy ones, and if you significantly overrun the time frame you set, switch to the next task in your ordered list. If you have time left, you can come back and fix the issues later.
I recommend spending the last 15 minutes of each skill-set checking the whole environment; there is a special button for the end state of the skill-set. Fifteen minutes should be enough to bring it to the correct state.
Read the tasks carefully: they frequently include hints on how to avoid the hidden rocks of the environment, e.g. check all the options of the objects you are asked to create. If you do not follow them exactly, the resulting problems will make your life significantly harder.
Some tasks are not clear, and you can ask your proctor for clarification. But the proctor cannot always rephrase a task without violating the exam rules; if he cannot explain what a task requests, follow “best practice”.
In general: be focused, be careful, and get a lot of practice before the exam. I took the preparation courses, but honestly they were just a way to guarantee time and an environment for training. You can prepare by yourself if your management and family grant you the opportunity. If you have no such generous option, apply for a preparation course; it is real value for money, very useful and informative. The course gives you experience of working on the same machines you will use on the exam. In my case the machines were physically the same, just with a different OS image. By the way, try to get used to the local keyboard of the country where you are going to take the exam. English and US keyboards are different, and that difference can consume vital time on the exam.
Good Luck.
Eter


NOLOGGING in numbers

Hi All
I have done a small investigation into redo generation. From the early days of my career I remembered that NOLOGGING operations are very effective for performance, but I had never tried to quantify the effect.
Every application can theoretically be split into four groups of tables (I use my personal names, but hopefully they make sense):
1) Regular tables – contain valuable information that needs to be stored for legal and functional purposes. Stored as normal tables.
2) Staging tables – contain process-lifetime information that is easily re-creatable. Used for transferring information between sessions and for report generation. Stored as regular tables or materialized views.
3) Session temporary tables – contain process-lifetime information, easily re-creatable. Used for reporting. Stored as GLOBAL TEMPORARY tables ON COMMIT PRESERVE ROWS.
4) Transaction temporary tables – contain process-lifetime information, easily re-creatable. Used for processing optimisation. Stored as GLOBAL TEMPORARY tables ON COMMIT DELETE ROWS (see the sketch after this list).
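To make the two temporary flavours concrete, here is a minimal sketch (table and column names are illustrative):

CREATE GLOBAL TEMPORARY TABLE session_tmp (id NUMBER, payload VARCHAR2(100))
ON COMMIT PRESERVE ROWS;   -- group 3: rows survive COMMIT, live until the session ends

CREATE GLOBAL TEMPORARY TABLE txn_tmp (id NUMBER, payload VARCHAR2(100))
ON COMMIT DELETE ROWS;     -- group 4: rows disappear at COMMIT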
By default all four groups generate redo records, which can consume a significant amount of resources. The redo information is valuable only if:
1) we support a standby database; or
2) the information inside the tables is valuable and has to survive a database crash.
To make a standby or backup completely usable after a NOLOGGING statement is run, a mechanism other than database recovery must be used to get or create current copies of the affected blocks: you have to drop and recreate the object with the invalidated blocks, or truncate it, using the program that maintains the object. An extra step therefore has to be introduced into the switchover/failover process.
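A quick way to check whether a backup or standby has been affected is the unrecoverable columns of V$DATAFILE; a minimal sketch:

SELECT name, unrecoverable_change#, unrecoverable_time
FROM v$datafile
WHERE unrecoverable_time IS NOT NULL;
-- files listed here contain blocks that media recovery cannot rebuild
-- since the last backup of that file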
Again based on my understanding the only business requirements for logging is to keep data from “Regular tables”. The safety of the data from other groups is not such important.
The only DML operation that can be optimised in terms of redo generation is INSERT with the APPEND hint (MERGE is effectively a presentation layer above INSERT, so it can be treated the same way). The APPEND hint, when it works, has one drawback: the data in the table is not available until the end of the transaction, failing with the following error:
ORA-12838: cannot read/modify an object after modifying it in parallel
This is linked to the fact that Oracle cannot build a consistent image of a block for which no undo information exists. It makes using this hint on global temporary tables with ON COMMIT DELETE ROWS unreasonable: you can insert the data, but you will never be able to read it before it is deleted.
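A minimal demonstration of the restriction (demo_append is an illustrative name):

CREATE TABLE demo_append (n NUMBER);

INSERT /*+ APPEND */ INTO demo_append
SELECT level FROM dual CONNECT BY level <= 10;

-- any read of the table in the same transaction now fails:
SELECT COUNT(*) FROM demo_append;
-- ORA-12838: cannot read/modify an object after modifying it in parallel

COMMIT;

-- after COMMIT the data is visible as usual
SELECT COUNT(*) FROM demo_append;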
Another fact I have to highlight: UPDATE and DELETE always generate redo. So if a table is updated intensively, the gains will be minimal. Avoiding these operations on temporary tables is another skill developers have to acquire for optimal application performance.
There are five parameters that affect segment logging: database LOGGING, database FORCE LOGGING, tablespace LOGGING, tablespace FORCE LOGGING (which can be switched on for tablespaces holding “Regular tables” and off for tablespaces holding “Staging tables”), and table LOGGING. Global temporary tables are always in NOLOGGING mode, so for the “Session temporary” and “Transaction temporary” groups we can assume all these parameters are NO. Production databases should always run in protected mode, so database LOGGING should always be YES; the value NO is outside the scope of this investigation.
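For reference, this is how the five switches are set (tablespace and table names are illustrative):

ALTER DATABASE FORCE LOGGING;                -- database-level force logging
ALTER DATABASE NO FORCE LOGGING;
ALTER TABLESPACE regular_ts FORCE LOGGING;   -- for tablespaces with "Regular tables"
ALTER TABLESPACE staging_ts NOLOGGING;       -- default attribute for new segments
ALTER TABLE test.staging_tab NOLOGGING;      -- table-level attribute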
For the test I created the table TEST.BIGTABLE (column1 NUMBER) with 39,999,960 rows and a few target tables for INSERT AS SELECT statements from the BIGTABLE data set. The results are below.

Regular table

TABLE LOGGING                        *        *        N        Y        N        Y        Y
TABLESPACE LOGGING                   *        *        Y        N        N        Y        Y
TABLESPACE FORCE LOGGING             *        Y        N        N        N        N        N
DATABASE LOGGING                     Y        Y        Y        Y        Y        Y        Y
DATABASE FORCE LOGGING               Y        N        N        N        N        N        N
Redo for INSERT APPEND               501000K  501000K  457K     501000K  456K     501000K  501000K
Redo for standard INSERT AS SELECT   501000K  501000K  501000K  501000K  501000K  501000K  501000K

(* = any value)

Amount of redo for temporary tables

                             Standard INSERT AS SELECT   INSERT APPEND
Transaction temporary table  110K                        0.3K
Session temporary table      110K                        0.3K

I hope all of the above makes sense and can be used for good.

P.S. The “redo size” values were taken from AUTOTRACE statistics.
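For anyone who wants to repeat the measurement, one run looked roughly like this (target_nolog stands for any of the target tables; an illustrative name):

SET AUTOTRACE TRACEONLY STATISTICS

INSERT /*+ APPEND */ INTO test.target_nolog SELECT * FROM test.bigtable;
-- note the "redo size" line in the statistics
ROLLBACK;

INSERT INTO test.target_nolog SELECT * FROM test.bigtable;
-- compare "redo size" for the conventional path
ROLLBACK;

SET AUTOTRACE OFF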

Application Code Level script

With patches and fixes delivered to different sites and applied to different testing, verification and production regions, there is a need for a simple way to compare database objects in an application schema without having a database link between the databases. This can be challenging, as the PL/SQL code is usually wrapped and database segments have different storage attributes. Here I show a method to achieve this comparison.
The reason I think this would be interesting to the public is that the problem is very common and current, yet there is no white paper that solves it. The only note that pretends to cover the topic is 781719.1, but it speaks only about PL/SQL objects, which is not enough in the real world.
The attached script uses the MD5 hashing algorithm to compute a hash value associated with the version of a database object. The CLOB to be hashed is obtained from the DBMS_METADATA package, or from the DBA_SOURCE view for PL/SQL objects. The principles are very simple, but a few points came up during development:
1. Tables, indexes, etc. with different storage configurations should generate the same hash. This was solved with DBMS_METADATA.SET_TRANSFORM_PARAM.
2. Tables with a different order of columns and constraints have to produce the same hash too. The solution was to use the simple XML (SXML) format of the DBMS_METADATA output and to sort the columns and constraints alphabetically using XML functions.
3. Nested and index-organized service tables appear in DBA_OBJECTS as TABLES, but DBMS_METADATA includes their description in the master objects and fails on the dependent items. My solution was to use the DBA_TABLES view to select only the top-level objects.
4. Java objects have no option to generate a description in XML format, so for them we generate plain DDL instead.
5. A role description should include the grants made to the role. DBMS_METADATA does not include grants in the generated description, so again XML functions were used to build the proper CLOB.

The following privileges are required for the user who runs the script:

-- GRANT CREATE PROCEDURE
-- GRANT EXECUTE ON DBMS_CRYPTO
-- GRANT EXECUTE ON DBMS_METADATA
-- GRANT SELECT ON DBA_SOURCE
-- GRANT SELECT ON DBA_SYS_PRIVS
-- GRANT SELECT ON DBA_TAB_PRIVS
-- GRANT SELECT ON DBA_ROLE_PRIVS

The script takes one parameter – the application schema name – and outputs a list of database objects with their corresponding hash values.

create or replace function verify_source(p_source_type in varchar2,
p_source_name in varchar2,
p_source_owner in varchar2) return varchar2 AUTHID CURRENT_USER as
code_source clob;
md5hash varchar2(32);
v_h NUMBER; -- handle returned by OPEN
v_th NUMBER; -- handle returned by ADD_TRANSFORM
begin
IF p_source_type in ('VIEW','ROLE','TABLE','INDEX'
,'MATERIALIZED_VIEW','MATERIALIZED_VIEW_LOG','SEQUENCE','SYNONYM') THEN
v_h := DBMS_METADATA.OPEN(p_source_type);
if p_source_type not in ('ROLE') THEN
DBMS_METADATA.SET_FILTER(v_h,'SCHEMA',p_source_owner);
END IF;
DBMS_METADATA.SET_FILTER(v_h,'NAME',p_source_name);
v_th := DBMS_METADATA.ADD_TRANSFORM(v_h,'SXML');
if p_source_type not in ('VIEW','ROLE') THEN
DBMS_METADATA.SET_TRANSFORM_PARAM (v_th,'SEGMENT_ATTRIBUTES',FALSE);
DBMS_METADATA.SET_TRANSFORM_PARAM (v_th,'STORAGE',FALSE);
DBMS_METADATA.SET_TRANSFORM_PARAM (v_th,'TABLESPACE',FALSE);
DBMS_METADATA.SET_TRANSFORM_PARAM (v_th,'PARTITIONING',FALSE);
END IF;
code_source := DBMS_METADATA.FETCH_CLOB(v_h);
IF p_source_type in ('TABLE') THEN
-- get rid of column/constraint ordering differences
SELECT UPDATEXML(
UPDATEXML(SRC,'/TABLE/RELATIONAL_TABLE/COL_LIST',
XMLELEMENT("COL_LIST", (
XMLQuery (
'for $i in /TABLE/RELATIONAL_TABLE/COL_LIST/COL_LIST_ITEM
order by $i/NAME
return $i'
passing by value SRC
RETURNING CONTENT
)
) )
) ,'/TABLE/RELATIONAL_TABLE/FOREIGN_KEY_CONSTRAINT_LIST',
XMLELEMENT("FOREIGN_KEY_CONSTRAINT_LIST", (
XMLQuery (
'for $i in
/TABLE/RELATIONAL_TABLE/FOREIGN_KEY_CONSTRAINT_LIST/FOREIGN_KEY_CONSTRAINT_LIST_ITEM
order by $i/NAME
return $i'
passing by value SRC
RETURNING CONTENT
)
) )
).getClobVal() INTO code_source
FROM ( SELECT XMLQuery(
'declare function local:removeNS($e as element()) as element()
{
element { QName("", local-name($e)) }
{
for $i in $e/node()|$e/attribute::*
return typeswitch ($i)
case element() return local:removeNS($i)
default return $i
}
}; (::)
local:removeNS($SRC/child::*)'
passing XMLType(code_source) as "SRC"
returning content
) SRC
FROM dual) INITSRC;
END IF;
DBMS_METADATA.CLOSE(v_h);
ELSIF p_source_type in ('PROCEDURE','FUNCTION','TYPE','TYPE BODY','TRIGGER','PACKAGE','PACKAGE BODY')
THEN
code_source := '';
for source_record in (select text from dba_source where owner = upper(p_source_owner) and name =
upper(p_source_name) and type = upper(p_source_type) order by line)
loop
code_source := code_source||source_record.text;
end loop;
ELSIF p_source_type in ('JAVA_CLASS','JAVA_SOURCE') THEN
code_source := DBMS_METADATA.GET_DDL(p_source_type,p_source_name,upper(p_source_owner));
ELSIF p_source_type in ('ROLE') THEN
SELECT
INSERTCHILDXMLAFTER(SRC,'/ROLE','TYPE[1]',
(SELECT
XMLAgg(
XMLELEMENT("privs",
XMLFOREST(OWNER, TABLE_NAME, PRIVILEGE)))
FROM
(SELECT DISTINCT *
FROM (
SELECT OWNER, TABLE_NAME, PRIVILEGE
FROM dba_tab_privs
WHERE GRANTEE IN
(SELECT GRANTED_ROLE
FROM DBA_ROLE_PRIVS
START WITH GRANTEE=p_source_name
CONNECT BY NOCYCLE PRIOR GRANTED_ROLE=GRANTEE
UNION ALL
SELECT p_source_name FROM DUAL)
UNION ALL
SELECT NULL OWNER, PRIVILEGE TABLE_NAME, NULL PRIVILEGE
FROM dba_sys_privs
WHERE GRANTEE IN (SELECT GRANTED_ROLE
FROM DBA_ROLE_PRIVS
START WITH GRANTEE=p_source_name
CONNECT BY NOCYCLE PRIOR GRANTED_ROLE=GRANTEE
UNION ALL
SELECT p_source_name FROM DUAL))
ORDER BY 1,2,3))).getClobVal() INTO code_source
FROM ( SELECT XMLQuery(
'declare function local:removeNS($e as element()) as element()
{
element { QName("", local-name($e)) }
{
for $i in $e/node()|$e/attribute::*
return typeswitch ($i)
case element() return local:removeNS($i)
default return $i
}
}; (::)
local:removeNS($SRC/child::*)'
passing XMLType(code_source) as "SRC"
returning content
) SRC
FROM dual) INITSRC;
END IF;

md5hash := rawtohex(dbms_crypto.hash(typ => dbms_crypto.HASH_MD5,
src => code_source));
return md5hash;
exception
WHEN OTHERS THEN
return p_source_type||p_source_name;
end;
/
show error

Spool code_level_checksum.lst
set pagesize 5000
set heading off
set echo off
set feedback off

col OBJECT_TYPE FORMAT a30
col OBJECT_NAME FORMAT a30
col HASHVAL FORMAT a35

prompt ###################################
prompt # TABLES #
prompt ###################################

SELECT 'TABLE' OBJECT_TYPE, TABLE_NAME OBJECT_NAME , verify_source('TABLE',TABLE_NAME, OWNER)
HASHVAL
FROM DBA_TABLES WHERE OWNER='&1'
AND IOT_NAME IS NULL
order by 1,2
/

prompt ###################################
prompt # INDEXES #
prompt ###################################

SELECT 'INDEX' OBJECT_TYPE, INDEX_NAME OBJECT_NAME , verify_source('INDEX',INDEX_NAME, OWNER)
HASHVAL
FROM DBA_INDEXES WHERE OWNER='&1'
order by 1,2
/

prompt ###################################
prompt # ROLES #
prompt ###################################

SELECT 'ROLE' OBJECT_TYPE, ROLE OBJECT_NAME , verify_source('ROLE',ROLE, 'TCTDBS') HASHVAL
FROM DBA_ROLES
order by 1,2
/

prompt ###################################
prompt # JAVA SOURCES #
prompt ###################################

select OBJECT_TYPE, OBJECT_NAME, verify_source('JAVA_SOURCE',OBJECT_NAME, OWNER) HASHVAL FROM
dba_objects
WHERE OWNER='&1' and OBJECT_TYPE IN ('JAVA SOURCE')
order by 1,2
/

prompt ###################################
prompt # JAVA CLASSES #
prompt ###################################

select 'JAVA_CLASS' OBJECT_TYPE, NAME OBJECT_NAME, verify_source('JAVA_CLASS',NAME, OWNER) HASHVAL
FROM dba_java_classes
WHERE OWNER='&1'
order by 1,2
/

prompt ###################################
prompt # MISC #
prompt ###################################

select OBJECT_TYPE, OBJECT_NAME, verify_source(OBJECT_TYPE,OBJECT_NAME, OWNER) HASHVAL FROM
dba_objects
WHERE OWNER='&1' and OBJECT_TYPE IN ('PROCEDURE','FUNCTION','TYPE','TYPE BODY','TRIGGER','VIEW',
'MATERIALIZED_VIEW','MATERIALIZED_VIEW_LOG','SEQUENCE','SYNONYM','PACKAGE','PACKAGE BODY')
order by 1,2
/

spool off;

drop function verify_source
/

Hope you will find it useful.

The output from this script can be compared as a simple flat file using any favourite comparison tool, e.g. WinMerge.
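For example, assuming the script above is saved as code_level_checksum.sql, a comparison of two databases could look like this (connect strings are illustrative):

$ sqlplus dba_user@DB1 @code_level_checksum.sql APPSCHEMA
$ mv code_level_checksum.lst db1.lst
$ sqlplus dba_user@DB2 @code_level_checksum.sql APPSCHEMA
$ mv code_level_checksum.lst db2.lst
$ diff db1.lst db2.lst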

TO_CHAR or NOT TO_CHAR

Another post where I probably show trivial things. This issue is very easy to diagnose and fix, but quite complicated to prevent.
I'll start with my personal statistics on performance analysis during development. From my point of view there are five levels of SQL tuning used during code development. I will try to give the percentage of statements that are tuned at each level.

N  Method                                                                     % of statements tuned
1  No test; rely on the optimizer and experience.                             50
2  Execute on a developer database.                                           25
3  Execute on a database with real-size data.                                 15
4  Execute on a database with real-size data and analyse the execution plan.  10
5  Execute on a database with real-size data and analyse the execution plan
   and the predicates.                                                        0.0000001

It shows that we usually start to tune queries intensively only when they stop working at the production or preproduction stage. That makes sense, but some hidden rocks can and should be diagnosed with the first level of tuning alone. From my own experience, one's own expertise is easily overestimated: before starting this post I asked some of my friends, and no one was aware of this Oracle trick, so I believe it will be useful for a general audience too.
Would you check the predicates if the access method is the expected one and the correct index is used? Probably not, but Oracle loves to spring surprises. In our case it was a surprise involving concatenated indexes and implicit conversion.
I'll try to show the case using an example.

--Create sample table
CREATE TABLE eterpani.TESTTABLE
   (    COL1 NUMBER(10,0) DEFAULT (0) NOT NULL ENABLE,
        ID NUMBER(10,0) NOT NULL ENABLE,
        PARTITIONKEY CHAR(1) NOT NULL ENABLE,
        COL2 NUMBER(10,0),
        COL3 VARCHAR2(6 CHAR) NOT NULL ENABLE,
        COL4 DATE NOT NULL ENABLE,
        COL5 VARCHAR2(3 CHAR) NOT NULL ENABLE,
        AMOUNT NUMBER(16,3) DEFAULT (0) NOT NULL ENABLE,
         CONSTRAINT TESTTABLE_PK PRIMARY KEY (ID,PARTITIONKEY) USING INDEX )
/
--Put concatenated index on it where col2 column is in the middle
CREATE INDEX eterpani.TESTTABLE_IDX1 ON eterpani.TESTTABLE 
   (PARTITIONKEY,  COL4, COL5, COL2, COL3)
/
--Populate table by some data
INSERT INTO eterpani.TESTTABLE
SELECT COL1, ID, PARTITIONKEY, COL2, COL3, COL4,COL5,AMOUNT FROM dual
   MODEL DIMENSION by (65 i)
   MEASURES (0 COL1,
             0 ID,
             CAST(' ' as CHAR(1)) PARTITIONKEY,
             0 COL2,
             CAST(' ' as VARCHAR2(6)) COL3, -- must fit the VARCHAR2(6 CHAR) target column
             sysdate COL4,
             CAST(' ' as CHAR(3)) COL5,
             0 AMOUNT)
     (col1[for i from 1 to 100000 increment 1] = ROUND(DBMS_RANDOM.VALUE(100,150)),
      ID[for i from 1 to 100000 increment 1] = cv(i),
      PARTITIONKEY[for i from 1 to 100000 increment 1] = CHR(65+MOD(cv(i),26)),
      col2[for i from 1 to 100000 increment 1] = ROUND(DBMS_RANDOM.VALUE(10,12)),
      col3[for i from 1 to 100000 increment 1] = PARTITIONKEY[cv(i)]||DBMS_RANDOM.STRING('U',5),
      col4[for i from 1 to 100000 increment 1] = TRUNC(SYSDATE)+ROUND(DBMS_RANDOM.VALUE(-5,5)),
      col5[for i from 1 to 100000 increment 1] = TO_CHAR(ROUND(DBMS_RANDOM.VALUE(10,999))),
      AMOUNT[for i from 1 to 100000 increment 1] = ROUND(DBMS_RANDOM.VALUE(100,1000)))
/
COMMIT
/

The preparation steps are over. Let's run the query.

Set autotrace on
SELECT PARTITIONKEY, COL4, COL3, COL2, COL5, SUM(AMOUNT) SUMAMOUNT
FROM eterpani.TESTTABLE
WHERE PARTITIONKEY = 'C' AND COL3='COWRTE'
     AND COL4 = TRUNC(SYSDATE) AND COL2 = 11
     AND COL5 = 901 AND COL1=131
GROUP BY PARTITIONKEY,  COL4, COL5, COL2, COL3;

Let’s look at the execution plan.

Execution Plan
----------------------------------------------------------
Plan hash value: 3717891987

-------------------------------------------------------------------------------------------
|Id| Operation                   | Name           | Rows  | Bytes | Cost (%CPU)|  Time    |
-------------------------------------------------------------------------------------------
| 0|SELECT STATEMENT             |                |   1   |  59   |  4   (0)   | 00:00:01 |
| 1| SORT GROUP BY NOSORT        |                |   1   |  59   |  4   (0)   | 00:00:01 |
|*2|  TABLE ACCESS BY INDEX ROWID| TESTTABLE      |   1   |  59   |  4   (0)   | 00:00:01 |
|*3|   INDEX RANGE SCAN          | TESTTABLE_IDX1 |   1   |       |  4   (0)   | 00:00:01 |
-------------------------------------------------------------------------------------------

If someone had shown me this execution plan a few months ago, I would have said it was a good one without any doubt. But let me make a small change in the query: put an explicit conversion on the COL5 predicate.

Set autotrace on
SELECT PARTITIONKEY, MAX(COL1), COL4, COL3, COL2, COL5, SUM(AMOUNT) SUMAMOUNT
FROM eterpani.TESTTABLE
WHERE PARTITIONKEY = 'C' AND COL3='COWRTE'
     AND COL4 = TRUNC(SYSDATE)  AND COL2 = 11
     AND COL5 = TO_CHAR(901) AND COL1=131
GROUP BY PARTITIONKEY,  COL4, COL5, COL2, COL3;

As a result we have the same execution plan; even the “Plan hash value” is the same. At this stage you might say: no difference, no reason to bother. But the difference exists. First, pay attention to the bottom part of the autotrace output.

Without TO_CHAR: Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
9 consistent gets
0 physical reads
0 redo size
675 bytes sent via SQL*Net to client
513 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
With TO_CHAR: Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
746 bytes sent via SQL*Net to client
513 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed

Does the tripled number of blocks read confuse you? The answer lies in the middle part of the autotrace output.

Without TO_CHAR: Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COL1"=131)
3 - access("PARTITIONKEY"='C' AND "COL4"=TRUNC(SYSDATE@!) AND "COL2"=11 AND "COL3"='COWRTE')
    filter("COL3"='COWRTE' AND "COL2"=11 AND TO_NUMBER("COL5")=901)

With TO_CHAR: Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("COL1"=131)
3 - access("PARTITIONKEY"='C' AND "COL4"=TRUNC(SYSDATE@!) AND "COL5"='901' AND "COL2"=11 AND "COL3"='COWRTE')

As you understand, the most selective predicate in this query is COL3='COWRTE'. This predicate uses an indexed column and is mentioned in the corresponding section, so everything should be easy and straightforward. But there is one small difference: without the explicit conversion, Oracle implicitly converts every COL5 value to a number. Why Oracle does it this way, the gods only know, but it prevents the use of COL5 in the access path. We still use the index, but we read more blocks.
I have tried to illustrate the difference in reading with a picture:
[Figure: index block reading schema]
All index blocks with the same PARTITIONKEY and COL4 values have to be reviewed while searching for the relevant records. It is only index block reading, but it is an additional read operation that can dramatically increase query time, depending on the cardinality of COL5. Moreover, we waste extra CPU: instead of one conversion of a constant to a string, we convert every single COL5 value to a number. Believe me, that is not what we expect from our databases.
There are two conclusions from the above:
Firstly – be careful when looking at an execution plan; check not only the plan but also the predicates and other bits and pieces.
Secondly – implicit things have a tendency to work unpredictably. Convert explicitly where you can.
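If you prefer EXPLAIN PLAN to AUTOTRACE, DBMS_XPLAN prints the same predicate section; a minimal sketch against the table above:

EXPLAIN PLAN FOR
SELECT SUM(AMOUNT) FROM eterpani.TESTTABLE
WHERE PARTITIONKEY = 'C' AND COL4 = TRUNC(SYSDATE) AND COL5 = 901;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- TO_NUMBER("COL5")=901 in the "Predicate Information" section is the
-- tell-tale sign of an implicit conversion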
Useful doc: ORA-01722: Invalid Number; Implicit Conversion Not Working Correctly [ID 743595.1]

Watch and Wait

Over the last 6 months I’ve become aware of two (new to me) commands (Kevin Closson pointed out that wait is actually a shell “builtin” command as opposed to any other type of command ;-)) that have been very useful: watch and wait. If you are not already familiar with them then I recommend you have a read, have a play and wait patiently for the first opportunity to use them.

There is no point in quoting the man pages here, so I’ll just give quick examples.

watch

In its simplest form: watch <command>

For example:

$ watch date

This displays the output below on screen, refreshing every 2 seconds (as stated in the top left):

Every 2.0s: date                                                     Sun Jul 24 21:15:37 2011

Sun 24 Jul 2011 21:15:37 BST

The example is clearly not very useful, but it demonstrates the functionality. The command I’ve been using most with watch is crsctl stat res ora.<db_name>.db -t, which is really nice for monitoring the state of the eight-instance RAC database running on your Exadata system[1] during patching or other activity that does more than its fair share of stopping and starting instances.
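Two watch options worth knowing here: -n sets the refresh interval in seconds and -d highlights what changed between refreshes. For example (keeping the placeholder database name):

$ watch -n 5 -d "crsctl stat res ora.<db_name>.db -t"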

wait

My introduction to wait was over a few beers. I’d asked a friend how he was monitoring for the completion of multiple child processes run concurrently as part of something he’d previously described. It turned out he wasn’t monitoring for completion in a script, because he didn’t need to… However, for one of the scripts I’m currently working on I needed to wait for all the child processes to exit (and check their exit status) before proceeding. Fortunately another member of the group said:

Sounds like what you need is wait.

I’d never heard of wait, but it certainly sounded promising! I put together a very simple script to test that my interpretation of the man page was correct:

#!/bin/bash

myCommand () {
 echo "I am command $1, I will be \"working\" for $2 seconds and my exit status will be $3."
 sleep $2
 exit $3
}

echo The time is: $(date +%T)
myCommand 1 20 $1 &
myCommand 2 10 $2 &
echo The time is: $(date +%T)
echo "Waiting..."
wait %1
j1_exit=$?
wait %2
j2_exit=$?
if [[ $j1_exit -ne 0 ]] || [[ $j2_exit -ne 0 ]]; then
 echo "At least one of the child processes didn't finish... Exiting!"
 echo "Child process 1 exited with $j1_exit"
 echo "Child process 2 exited with $j2_exit"
else
 echo "Completed with no errors from child processes."
fi
echo The time is: $(date +%T)

Running the command with a selection of input parameters to control the exit status of the child processes:

martin$ ./wait.sh 0 0
The time is: 21:19:04
I am command 1, I will be "working" for 20 seconds and my exit status will be 0.
I am command 2, I will be "working" for 10 seconds and my exit status will be 0.
The time is: 21:19:04
Waiting...
Completed with no errors from child processes.
The time is: 21:19:24
martin$ ./wait.sh 1 0
The time is: 21:19:27
I am command 1, I will be "working" for 20 seconds and my exit status will be 1.
I am command 2, I will be "working" for 10 seconds and my exit status will be 0.
The time is: 21:19:27
Waiting...
At least one of the child processes didn't finish... Exiting!
Child process 1 exited with 1
Child process 2 exited with 0
The time is: 21:19:47
martin$ ./wait.sh 0 1
The time is: 21:19:52
I am command 1, I will be "working" for 20 seconds and my exit status will be 0.
I am command 2, I will be "working" for 10 seconds and my exit status will be 1.
The time is: 21:19:52
Waiting...
At least one of the child processes didn't finish... Exiting!
Child process 1 exited with 0
Child process 2 exited with 1
The time is: 21:20:12
martin$ ./wait.sh 1 1
The time is: 21:20:16
I am command 1, I will be "working" for 20 seconds and my exit status will be 1.
I am command 2, I will be "working" for 10 seconds and my exit status will be 1.
The time is: 21:20:16
Waiting...
At least one of the child processes didn't finish... Exiting!
Child process 1 exited with 1
Child process 2 exited with 1
The time is: 21:20:36

I’m sure it seems that I’m easily pleased, but this gives me the functionality to easily create multi-threaded shell scripts, which isn’t something I’ve needed, or at least realised would be useful to me, in the past.
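As an aside, the job specs (%1, %2) can be swapped for PIDs captured with $!, which scales more comfortably to many child processes. A sketch reusing myCommand from the script above:

myCommand 1 20 0 &
pid1=$!
myCommand 2 10 1 &
pid2=$!
wait $pid1; j1_exit=$?
wait $pid2; j2_exit=$?
echo "Child 1 exited with $j1_exit, child 2 with $j2_exit"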
_________________________________________________
1. Other multi-instance database platforms are available 🙂

FGAC and new partition features

It is better to start with a minor note than to wait a long time without any. So my first post in this blog is about a very specific problem.

I spent some time investigating before I got to the bottom of this miscommunication between SQL statement rewriting and Fine-Grained Access Control (FGAC), so I believe it may be interesting for an audience too. Below is a test case that makes the issue easy to reproduce.

Create a partitioned table with non-standard partition names:

CREATE TABLE TESTONFGAC
( ID NUMBER(5,0) NOT NULL ENABLE,
PartitionKey NUMBER(5,0) DEFAULT 0 NOT NULL ENABLE,
COMENT VARCHAR2(255)
) TABLESPACE "USERS"
PARTITION BY RANGE (PartitionKey)
(PARTITION "1" VALUES LESS THAN (1) SEGMENT CREATION IMMEDIATE TABLESPACE "USERS" ,
PARTITION "2" VALUES LESS THAN (2) SEGMENT CREATION IMMEDIATE TABLESPACE "USERS" ,
PARTITION "3" VALUES LESS THAN (3) SEGMENT CREATION IMMEDIATE TABLESPACE "USERS" )
/

Create the FGAC policy function. For our case it does not matter how complicated it is, so create a dummy one:

CREATE OR REPLACE FUNCTION TEST_FGA (object_schema IN VARCHAR2, object_name VARCHAR2) RETURN VARCHAR2
IS
BEGIN
RETURN '1=1';
END;
/

Create the policy on the newly created table using the function

exec DBMS_RLS.ADD_POLICY ( 'ETERPANI', 'TESTONFGAC', 'test_privacy', 'ETERPANI', 'TEST_FGA');

Everything up to now was just preparation. Now run the relatively new (available since Oracle 10g) partition-extended syntax:

DELETE FROM ETERPANI.TESTONFGAC PARTITION ("3") WHERE ID=3;

This reasonably simple statement fails with “ORA-28113: policy predicate has error”. Further diagnostics can be found in the Oracle trace file:

*** 2011-06-25 01:23:27.188
-------------------------------------------------------------
Error information for ORA-28113:
Logon user : ETERPANI
Table/View : ETERPANI.TESTONFGAC
Policy name : TEST_PRIVACY
Policy function: ETERPANI.TEST_FGA
RLS view :
SELECT "ID","PARTITIONKEY","COMENT" FROM "ETERPANI"."TESTONFGAC" PARTITION (3) "TESTONFGAC" WHERE (1=1)
ORA-14108: illegal partition-extended table name syntax
-------------------------------------------------------------

As we can see, the double quotes around the partition name have magically disappeared.
With STANDARD partition names (e.g. FIRST, SECOND, etc.) this does not matter, but if you choose lowercase names, or names starting with digits, you get ORA-28113.
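One obvious workaround is to rename such partitions to standard names, after which the partition-extended DML runs happily under the policy (a sketch):

ALTER TABLE ETERPANI.TESTONFGAC RENAME PARTITION "3" TO P3;
DELETE FROM ETERPANI.TESTONFGAC PARTITION (P3) WHERE ID=3;
-- the RLS view now generates PARTITION (P3), which parses correctly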

Fancy automatic schema-generation tools frequently create objects in lower case, and in this scenario that can be fatal.

The expected conclusion, “little wit in the head makes much work for the feet”: keep your solution standard and simple (if you can).

I Can’t Spell

Despite my efforts to come up with a good name for this blog I messed up! I wanted something with ORA or Oracle in the URL and thought of oraganism… However, when I signed up for WordPress I made a typo and placed an unwanted U in the username, giving me oragansium. Tut, tut, tut!

Anyway, the very efficient support team at WordPress.com have sorted it out for me and I’ve now got the URL I originally wanted.