INS-30502: No ASM disk group found

I’ve been mostly looking after busy Informix instances lately, so it was nice to have a 12c installation to do at work today. I quickly reacquainted myself with some of my personal rules of Oracle problem-solving:

  1. There is a lot of rubbish posted in online Oracle forums.
  2. Using MOS is generally a better bet.

However, both of these can be lazy ways of solving problems and neither provided the answer to the problem I had. There’s often no substitute for working through a problem yourself.

My 12c instance is a stand-alone one including Grid Infrastructure, ASM and role separation. This means installing Grid Infrastructure as user grid and the database software as user oracle. Having successfully installed Grid Infrastructure and browsed around my working ASM instance using SQL*Plus, I was slightly perturbed to get this error during the installation of the database software when it came to configuring the storage to be used:

INS-30502: No ASM disk group found

In the installActions log I had:

SEVERE: [FATAL] [INS-30502] No ASM disk group found.
CAUSE: There were no disk groups managed by the ASM instance +ASM.
ACTION: Use Automatic Storage Management Configuration Assistant to add disk groups.

But this wasn’t true!

[grid@oraserver ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid
[grid@oraserver ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 10 11:28:49 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
FRA                            MOUNTED
DATA                           MOUNTED

So what was going on? I vaguely remembered that there were some old bugs related to role separation, so I checked the ownership of key files and the group memberships of the oracle and grid users. Everything was OK!

There is a MOS note, 1069517.1, that offers some suggestions, but none of the solutions fit; in particular, the recommendation to check the permissions on $ORACLE_HOME/bin/oracle. Such a file doesn’t exist yet because I haven’t installed any database software!

Despite this, the problem did smell of a permissions issue. As the installer runs as user oracle, I decided to have a poke around as this user and found this whilst connected to my ASM instance:

[oracle@oraserver ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 10 10:47:55 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> select * from v$asm_diskgroup;

no rows selected

Is this the same ASM instance as the one above with its two disk groups? Isn’t it only with USER_ views or VPD that the output depends on who is connected?

Here the documentation comes to the rescue:

While an ASM instance is initialized, ASM discovers and examines the contents of all of the disks that are in the paths that you designated with values in the ASM_DISKSTRING initialization parameter. Disk discovery also occurs when you:

  • Run the ALTER DISKGROUP…ADD DISK and ALTER DISKGROUP…RESIZE DISK commands
  • Query the V$ASM_DISKGROUP and V$ASM_DISK views

So the v$asm_diskgroup view actually gets the OS to do something under the covers.
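
A natural cross-check (not part of my original session, so treat it as a sketch) is to look at what that discovery actually finds, since v$asm_disk triggers the same discovery along asm_diskstring:

-- run as the user that cannot see the disk groups; an empty result here points at an
-- OS-level visibility problem rather than anything inside ASM itself
select path, mount_status, header_status from v$asm_disk;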

At which point the problem became somewhat obvious:

SQL> show parameter diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/asm/

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Automatic Storage Management option

On this system, /dev/asm contains the block devices set up in udev. A permission check revealed all:

[oracle@oraserver ~]$ cd /dev/asm
-bash: cd: /dev/asm: Permission denied

The directory could only be viewed by the grid user and members of the asmadmin group. A quick chmod 775 on the directory resolved the problem.
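
For reference, the check and fix amounted to nothing more than the following, run as a user with rights over /dev/asm (the exact ownership and original mode will depend on your udev rules, so treat this as a sketch):

# confirm who owns the device directory and what mode it carries (output will vary)
ls -ld /dev/asm
# open the directory to group and others for read/execute so the oracle user can traverse it
chmod 775 /dev/asm
# note: since these devices come from udev, the mode may need setting in the udev rule
# as well if it is to survive a reboot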

How Eter Pani became Oracle Certified Master

I got my OCM. Hurray!
Now one of the common questions people ask me is how to pass the OCM exam.

Firstly, I am very grateful to Oracle for introducing this exam. Finally it is a real exam, not a test with a set of answers to choose from. I can see how much more complicated it is for the examiners, but it is a real proof of DBA skills rather than general memory skills.

Secondly, this exam does not prove exceptional knowledge of Oracle. It is proof that you are fluent in all the basic skills. During the exam everything works according to the documentation. Personally, I keep a collection of favourite Metalink notes and advanced instructions that I use at least once a week; you do not need anything like that in the exam.

The environment is ready for you. You do not need to reconfigure switches for RAC or install additional OS packages. What you are doing is purely Oracle database administration, nothing OS-specific. If something genuinely differs between UNIX flavours, forget about it; it should not be part of the exam.

When I arrive at a client site I frequently have no internet access, so I keep a local copy of the Oracle documentation. Reading a local copy is actually faster than browsing the online one. I was used to working with the local copy, and that habit was really useful during the exam.

Another habit I find useful is preparing all scripts in a text file and then copying them into the SQL*Plus or shell window. If you need to rerun a script, or slightly alter it for a different skill-set, you can reuse your own history. For example, I keep my backup/restore scripts in such a file.

You have two days, 14 hours including lunch and breaks, for seven skill-sets. No skill-set takes more than two hours. If you do not believe you can do something in under two hours, forget about it: even if it comes up in the exam you will not be able to do it in time. Focus on the things you will be able to do.

The exam is based on an 11g Release 2 database. If something differs between patch sets, again, forget about it. Asking for information specific to a patch set would be unfair to people used to the base release, so such a question will not appear in the exam.

When you read through the skill-set tasks at the beginning, read them all the way to the end. Mark the tasks that will require some investigation in the documentation, and mark the tasks you doubt you can solve. Estimate the time for each task. Start with the short and easy ones, and if you significantly overrun the time frame you set, switch to the next task in your ordered list. If you have time you can come back and fix the issues later.

I recommend checking the whole environment 15 minutes before the end of each skill-set; there is a special button for the end state of the skill-set. Fifteen minutes should be enough to bring it to the correct state.

Read the tasks carefully: they frequently include hints on how to avoid the hidden rocks in the environment, e.g. check all the options of the objects you are asked to create. If you do not follow them exactly, the resulting problems will make your life significantly harder.

Some tasks are not clear, and you can ask your proctor for clarification. However, the proctor cannot always rephrase a task without violating the exam rules; if he cannot explain what a task is asking for, follow “best practice”.

In general: concentrate, be careful, and get plenty of practice before the exam. I took the preparation courses, but honestly they were just a way to guarantee the time and environment for training. You can prepare by yourself if your management and family grant you the opportunity; if you have no such generous option, apply for a preparation course. It is real value for money, very useful and informative, and it gives you experience of working on the same machines you will use in the exam. In my case the machines were physically the same, just with a different OS image. By the way, try to get used to the local keyboards of the country where you are going to take the exam. UK and US keyboards are different, and that difference can be the thing that consumes vital time in the exam.
Good Luck.
Eter

Checking Oracle Directory Object Path Validity

Bit of a mouthful that title. Today I was asked to check which, if any, directory objects had an invalid path. Normally this would be trivial but on the system I was on there were more than 90. So I knocked up a quick bit of SQL/KSH to do it for me and thought I’d share it here. I could have done something with PL/SQL but I was happy with quick and dirty.

In summary, the snippet below generates shell test commands into a temporary file and then uses “sed” to chop off the first and last lines before running the resulting shell script. I just pasted this into a SQL*Plus prompt in a PuTTY window – your mileage may vary.


set define off heading off pages 0 lines 200 trimspool on feedback off echo off
spool /tmp/dirchk.tmp
select '[[ -d '||directory_path||' ]] || echo "'||directory_name||' missing ('||directory_path||')"' from dba_directories order by 1;
spool off

!ls -l /tmp/dirchk.tmp
!sed -e '1d;$d' /tmp/dirchk.tmp > /tmp/dirchk.sh
!chmod 700 /tmp/dirchk.sh
!/tmp/dirchk.sh

On Linux I could have used the sed “-i” switch to edit in place, but this was AIX.
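
For reference, on Linux the same thing could be done in place, along these lines (an untested sketch of the equivalent):

!sed -i -e '1d;$d' /tmp/dirchk.tmp
!chmod 700 /tmp/dirchk.tmp
!/tmp/dirchk.tmp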

Example output:

NJTEMP missing (/home/nj)
RESPONSE_DIR missing (/mnt/app/files/response)
SCHEMA_DIR missing (/mnt/app/schema)
CONVERSION missing (/mnt/app/conversion)
FTP_DIR missing (/mnt/app/ftp_files)

Oracle Dictionary fragmentation

The purpose of this post is mainly to highlight the performance degradation caused by dictionary index fragmentation. It is something Oracle does not widely advertise, but it follows from the physical structure of the database.

Oracle databases age, and that age comes mainly from the number of DDL operations performed on the database. DDL operations modify the dictionary and introduce fragmentation into its indexes and tables.

I have put together a small test case:

-- CREATE TABLE
DROP TABLE list_customers
/
CREATE TABLE list_customers
   ( customer_id             NUMBER(6)
   , cust_first_name         VARCHAR2(20)
   , cust_last_name          VARCHAR2(20)
   , nls_territory           VARCHAR2(30)
   , cust_email              VARCHAR2(30))
   PARTITION BY LIST (nls_territory) (
   PARTITION asia VALUES ('CHINA', 'THAILAND'),
   PARTITION europe VALUES ('GERMANY', 'ITALY', 'SWITZERLAND'),
   PARTITION west VALUES ('AMERICA'),
   PARTITION east VALUES ('INDIA'))
/
-- ADD partitions
ALTER SESSION SET EVENTS '10046 trace name context forever, level 4'
/
ALTER TABLE list_customers ADD PARTITION south VALUES ('ANTARCTICA')
/
EXIT
-- DROP partition
ALTER SESSION SET EVENTS '10046 trace name context forever, level 4'
/
ALTER TABLE list_customers DROP PARTITION south
/
EXIT

It is an oversimplified case, with no dependencies on the object and no object statistics, but it already creates two sizeable trace files.
In summary, the ADD PARTITION statement inserts rows into the dictionary tables OBJ$, DEFERRED_STG$ and TABPART$, and the DROP PARTITION statement deletes rows from the same tables.
As you all know, tables with an insert-delete workload end up with a high level of fragmentation on their indexed columns.
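
The dictionary DML is easy to confirm from the trace itself; a minimal sketch for locating the file before exiting the traced session (the search command is illustrative):

-- run inside the traced session, before EXIT
select value as trace_file from v$diag_info where name = 'Default Trace File';
-- then search that file for the recursive dictionary statements, e.g.
-- grep -ci "insert into obj\$" <trace_file>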

Running the standard sizing report for the indexes on the three tables mentioned shows that the estimated number of leaf blocks for some of them is dramatically smaller than the actual number.

INDEX                              ESTIMATED (leaf blocks)  ACTUAL (leaf blocks)
SYS.TABPART$.I_TABPART_OBJ$                             47                   279
SYS.OBJ$.I_OBJ3                                          4                    20
SYS.OBJ$.I_OBJ4                                        422                  1475
SYS.OBJ$.I_OBJ1                                        422                   969
SYS.TABPART$.I_TABPART_BOPART$                          68                   125
SYS.DEFERRED_STG$.I_DEFERRED_STG1                       30                    53
SYS.OBJ$.I_OBJ5                                       1269                  1728
SYS.OBJ$.I_OBJ2                                       1269                  1726
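
The report itself is not shown here, but a comparable estimate-versus-actual figure can be produced along these lines. This is only a sketch: the index DDL, column list and 8K block size are assumptions, and a real exercise would pull the DDL with DBMS_METADATA.GET_DDL.

set serveroutput on
declare
  l_used_bytes  number;
  l_alloc_bytes number;
begin
  -- ask Oracle how big a freshly built copy of the index would be (needs table statistics)
  dbms_space.create_index_cost(
    'create index i_obj1_estimate on sys.obj$ (obj#)',   -- hypothetical column list
    l_used_bytes,                                        -- bytes of index data
    l_alloc_bytes);                                      -- bytes the segment would allocate
  dbms_output.put_line('Estimated blocks: '||ceil(l_alloc_bytes/8192));  -- assumes 8K blocks
end;
/
-- compare with the blocks the existing index segment actually occupies
select blocks as actual_blocks
from   dba_segments
where  owner = 'SYS'
and    segment_name = 'I_OBJ1';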

If you try to rebuild these indexes in the usual way, you get the Oracle error

ORA-00701: object necessary for warmstarting database cannot be altered

which effectively blocks any attempt to fix the fragmentation.
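
For illustration, an attempt like the following (index chosen purely as an example) is what gets rejected:

-- rejected with ORA-00701; core dictionary indexes cannot be rebuilt this way
-- while the database is open
alter index sys.i_obj1 rebuild;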

Index fragmentation primarily affects index FULL and RANGE scan operations, not UNIQUE scans. A UNIQUE scan is only affected once the index grows by an additional level.

The numbers in the table do not show anything dramatic, but it looks like we already have a measurable performance impact on common database operations, such as name resolution.

In the long term, I think every database with a high number of structure modifications has to go through a process of APPLICATION DATA migration regularly, once every 5 to 15 years.

UKOUG Database Server SIG (Leeds 2013)

On Thursday I attended the UKOUG Database Server SIG in Leeds. All slides have been uploaded to the UKOUG website.

https://www.ukoug.org/events/ukoug-database-server-sig-meeting-may-2013/

It’s the first SIG I’ve attended this year and after enjoying it so much I ought to try harder to get to other events. We had Oak Table presence from David Kurtz talking all things partitioning/compression and purging, two Oracle employees discussing first support and then ZFS/NetApp (I particularly enjoyed this one) and then Edgars Rudans on his evolution of the Exadata Minimal Patching process. All of these presentations are well worth downloading and checking out.

The last presentation of the day was me. I’ve never presented before and it took a big step out of my comfort zone to get up there, but I’m so glad I did. I would recommend it to anyone currently spending time in the audience thinking “I wish I had the confidence to do that”. It’s nerve-racking beforehand but exhilarating afterwards.

When blogging in the past I’ve liked how it makes you think a little bit harder before pressing the publish button. I think the best thing I got out of the whole presentation process was that it made me dig even deeper to make sure I’d done my homework.

After the SIG a good group headed out for Leeds Oracle Beers #3, which involved local beer, good burgers and Morris dancing; all good fun.

Monitoring real-time apply progress

In many cases monitoring Data Guard, at least in Maximum Performance mode, comes down to checking the transport and apply lag to make sure the standby database is not “too far” behind the primary, and quite often that is sufficient for day-to-day operation. By checking v$archived_log we can easily get information about the last archived and applied log sequence number:

SQL> select sequence#,ARCHIVED,APPLIED from v$archived_log order by sequence#;
SEQUENCE# ARCHIVED  APPLIED
---------- --------- ---------
… output truncated …
       208 YES       YES
       209 YES       YES
       210 YES       YES
       211 YES       IN-MEMORY

If we use real-time apply, changes are applied to the standby database as soon as they are written to the standby redo logs, which keeps the standby as up to date as possible without going to Maximum Availability or Maximum Protection.

However, if our primary database fails, how can we check what the last change transmitted from the primary was, and whether all transmitted changes were actually applied, before we activate the standby?
Of course we can trust that Oracle will apply all the transmitted redo when the standby is activated, but without knowing what the last transmitted change was we have no way of verifying it. It is also possible that the failover decision depends on how far behind the primary the standby was at the time of failure, or on how much data can potentially be lost.

One way of finding the information we need is by checking the v$standby_log and v$recovery_progress views.
v$standby_log displays information about the standby redo logs. The LAST_CHANGE# and LAST_TIME columns can be used to find the last changes transmitted from the primary.

SQL> select GROUP#,THREAD#,SEQUENCE#,STATUS,LAST_CHANGE#,LAST_TIME from v$standby_log;

    GROUP#    THREAD#  SEQUENCE# STATUS           LAST_CHANGE# LAST_TIME
---------- ---------- ---------- ---------- ------------------ -------------------
         4          1        213 ACTIVE                1699715 2013-04-18 20:26:15
         5          1          0 UNASSIGNED
         6          1          0 UNASSIGNED
         7          1          0 UNASSIGNED

SQL>

v$recovery_progress can be used to monitor database recovery operations, and gives us access to items such as Last Applied Redo, Active Apply Rate, Average Apply Rate, Apply Time per Log and a few more. The item we are interested in is “Last Applied Redo” and the value of its TIMESTAMP column. The value should be very close to, or match, the value of the LAST_TIME column from v$standby_log.

SQL> select START_TIME,TYPE, ITEM,UNITS,SOFAR,TIMESTAMP 
  2  from v$recovery_progress where ITEM='Last Applied Redo';
START_TIME          TYPE             ITEM                UNITS     TIMESTAMP
------------------- ---------------- ------------------- --------- -------------------
2013-04-17 22:46:26 Media Recovery   Last Applied Redo   SCN+Time  2013-04-18 20:26:16

SQL>
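
Putting the two views together gives a quick “anything received but not yet applied?” check. A sketch (not from the original session) along these lines:

-- gap between the newest redo received into an active standby log and the last redo applied;
-- assumes a single active media recovery session
select sl.last_time                                  as last_received,
       rp.timestamp                                  as last_applied,
       round((sl.last_time - rp.timestamp) * 86400)  as gap_seconds
from   (select max(last_time) as last_time
        from   v$standby_log
        where  status = 'ACTIVE') sl,
       (select timestamp
        from   v$recovery_progress
        where  item = 'Last Applied Redo') rp;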

NOLOGGING in numbers

Hi All
I have made a small investigation into redo generation. From the early days of my career I remembered that NOLOGGING operations are very effective for performance, but I had never tried to quantify the benefit.
Every application can theoretically be split into four groups of tables (I use my own names, but hopefully they make sense):
1) Regular tables – contain valuable information that needs to be stored for legal and functional purposes. Stored as normal tables.
2) Staging tables – contain process-lifetime-specific information that is easily re-creatable. Used for transferring information between sessions and for report generation. Stored as regular tables or materialized views.
3) Session temporary tables – contain process-lifetime-specific information that is easily re-creatable. Used for reporting. Stored as GLOBAL TEMPORARY tables with ON COMMIT PRESERVE ROWS.
4) Transaction temporary tables – contain process-lifetime-specific information that is easily re-creatable. Used for processing optimisation. Stored as GLOBAL TEMPORARY tables with ON COMMIT DELETE ROWS (see the short sketch after this list).
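
A minimal sketch of the two temporary-table flavours referred to above (table and column names are illustrative):

-- rows survive across commits for the life of the session
create global temporary table session_tmp (column1 number) on commit preserve rows;

-- rows disappear at the end of the transaction
create global temporary table txn_tmp (column1 number) on commit delete rows;
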
By default, all four groups generate redo log records, which can consume a significant amount of resources. The redo information is valuable if:
1) We support a standby database.
2) The information inside the tables is valuable and has to be safe in case of a database crash.
To make a standby or backup completely usable after a nologging statement is run, a mechanism other than database recovery must be used to get or create current copies of the affected blocks: you have to drop and recreate the object with the invalidated blocks, or truncate it, using the program that maintains the object. Thus an extra step has to be introduced into the switchover/failover process.
Again, based on my understanding, the only business requirement for logging is to keep the data from “Regular tables”. The safety of the data in the other groups is not as important.
The only DML operation that can be optimised in terms of redo generation is INSERT with the APPEND hint (MERGE is effectively a presentation layer above INSERT, so it can be treated the same way). The APPEND hint, when it kicks in, has one negative side effect: the data in the table is not available to the inserting session until the end of the transaction, due to the following error:
ORA-12838: cannot read/modify an object after modifying it in parallel
This is linked to the fact that Oracle cannot build a consistent image of a block for which no undo information exists. It makes using the hint on global temporary tables with ON COMMIT DELETE ROWS pointless: you can insert the data, but you will never be able to read it before it is deleted.
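
A quick sketch of the behaviour (table names are illustrative):

insert /*+ APPEND */ into staging_tab select * from source_tab;

-- any read of the table in the same transaction now fails with ORA-12838
select count(*) from staging_tab;

commit;

-- after the transaction ends the data is readable again
select count(*) from staging_tab;
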
Another fact I have to highlight: UPDATE and DELETE always generate redo. So if the table is updated intensively, the gains will be minimal. Avoiding these operations on temporary tables is another skill developers have to get used to for optimal application performance.
There are five parameters that affect segment logging: database LOGGING, database FORCE LOGGING, tablespace LOGGING, tablespace FORCE LOGGING (which can be switched on for tablespaces holding “Regular tables” and off for tablespaces holding “Staging tables”) and table LOGGING. Global temporary tables are effectively always in NOLOGGING mode, so for the “Session Temporary” and “Transaction Temporary” groups we can assume all of these parameters are NO. Production databases should always run protected, so DATABASE LOGGING should always be YES; the value NO is outside the scope of this investigation.
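
For reference, a sketch of where each of these switches lives and how to inspect or change it (object names are illustrative):

-- database level
select log_mode, force_logging from v$database;

-- tablespace level
select tablespace_name, logging, force_logging from dba_tablespaces;

-- table level
select owner, table_name, logging from dba_tables where owner = 'TEST';

-- switching a staging tablespace and table out of logging mode
alter tablespace staging_ts nologging;
alter table test.staging_tab nologging;
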
To test, I created the table TEST.BIGTABLE (column1 NUMBER) with 39,999,960 rows and a few target tables for INSERT AS SELECT statements against the BIGTABLE data set. The results are below.

Regular table

TABLE LOGGING                                  *        *        N        Y        N        Y        Y
TABLESPACE LOGGING                             *        *        Y        N        N        Y        Y
TABLESPACE FORCE LOGGING                       *        Y        N        N        N        N        N
DATABASE LOGGING                               Y        Y        Y        Y        Y        Y        Y
DATABASE FORCE LOGGING                         Y        N        N        N        N        N        N
Amount of redo for INSERT APPEND               501000K  501000K  457K     501000K  456K     501000K  501000K
Amount of redo for standard INSERT AS SELECT   501000K  501000K  501000K  501000K  501000K  501000K  501000K

Amount of redo for temporary tables

                          Standard INSERT AS SELECT   INSERT APPEND
Transaction Temp Table    110K                        0.3K
Session Temp Table        110K                        0.3K

I hope all of the above makes sense and can be put to good use.

P.S. The “redo size” values were taken from AUTOTRACE statistics.
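
For anyone wanting to reproduce the measurement, a minimal sketch (the target table name is illustrative, and the user needs the PLUSTRACE role or equivalent):

set autotrace traceonly statistics

-- the "redo size" row in the statistics section is the figure quoted above
insert /*+ APPEND */ into test.target_tab select column1 from test.bigtable;

commit;
set autotrace off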