Orphaned Files in ASM

There may be a better way to achieve this, but as I have found myself using it a number of times it seems worth sharing…

The problem of orphaned files can occur with or without ASM, and I’m sure most readers have found themselves identifying database files that are present at the file system or ASM level, but that the database (control file) knows nothing about. You want to clean up the debris from previous activities, but you need to be confident that your programmatic or manual verification of the file being orphaned is correct… No one wants to delete a file that should not be removed!

Before going into the steps I use for this I need to stress that the approach relies on an assumption:

ASM directory “+<DISKGROUP_NAME>/<DB_UNIQUE_NAME>” contains no database files that should be retained and belong to a database with a different DB_UNIQUE_NAME.

… and the example code snippets are based on the following further assumptions/statements:

  1. oratab has the instance name listed – Last time I checked DBCA does not take care of this for you in a RAC environment, but many companies add instance names. If you don’t then you’ll need to modify the approach to append the appropriate number to ORACLE_SID before the SQL*Plus & RMAN commands. One way to achieve this is covered in RAC/Exadata Shell Aliases, but to keep this simple I’ll leave that to those that need it. Note that you would also need to modify the approach used to run for multiple databases that is given at the end of the post.
  2. Parts of the code snippets assume you’re not running more than an eight node RAC database – Actually, the approach would work on a RAC database with more than 8 nodes as long as you didn’t pick a node with instance number(s) greater than 8 to run the script on.
  3. It is not perfect/foolproof!

If you use Oracle Managed Files (OMF) for all databases that use the ASM storage then you’re very likely (but not guaranteed) to be safe to use this approach, however, I can imagine a number of ways to end up in a position where this is not going to be the case. For example, if a database has been renamed, but the database files have not been moved within ASM to match the new name. This is something I’ve previously covered in Rename Database When Using ASM.

Basically, you have been warned and are responsible for making sure this approach is safe in your environment.

To protect you (dear Googler who just wants a quick fix for your immediate problem) from making a mistake you may not appreciate you could be about to make, it seems appropriate to provide the steps rather than my complete script, so that there is a slightly higher barrier to deleting things you really don’t want to delete.

OK, so that is enough of my caution.

Orphaned Files for a Single Database

Getting a list of the orphaned files for a database is as simple as the following (assuming the environment has already been sourced using oraenv for the instance):

rman log=orphaned_files_${ORACLE_SID%[1-8]}_DATA.log << EOF
connect target /
catalog start with '+DATA/${ORACLE_SID%[1-8]}';
EOF

A couple of comments about this snippet:

  1. The ${ORACLE_SID%[1-8]} strips the instance number off ORACLE_SID, which under most circumstances will give you the DB_NAME or DB_UNIQUE_NAME in a RAC environment, but this is not guaranteed to be the case as some people do odd things with naming standards.
  2. If you’re not using this in a RAC environment and you have instance names that end with a digit, which seems fairly common, then replace all the ${ORACLE_SID%[1-8]} strings with ${ORACLE_SID} to avoid chopping the end off the instance name.
  3. The diskgroup name is hard coded in this example, but it’s pretty trivial to come up with a way of obtaining the diskgroups, and one approach is given later in this post.
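The stripping behaviour in point 1 is plain POSIX parameter expansion, so it can be checked in isolation; the SID values below are made up for illustration:

```shell
# ${VAR%[1-8]} removes one trailing character matching [1-8], if present.
sid=PROD1
echo "${sid%[1-8]}"      # PROD
sid=PROD12               # only the final character is examined
echo "${sid%[1-8]}"      # PROD1
sid=REPORTS              # no trailing 1-8, value unchanged
echo "${sid%[1-8]}"      # REPORTS
```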

Generating the File Deletion Script

Once the “orphaned_files_*” file has been created then we need to parse out the file names and turn them into ASMCMD rm commands:

awk '{if (($1" "$2 == "File Name:") && ($3 !~ /spfile/)) {print "asmcmd rm "$3}}' orphaned_files_${ORACLE_SID%[1-8]}_*.log > asmcmd_cleanup.sh

This will generate a file named asmcmd_cleanup.sh containing the commands to delete all the orphaned files that were found in +<DISKGROUP_NAME>/<DB_UNIQUE_NAME> for the database, excluding any file or path containing the string “spfile” (because many places create an alias for spfile). You may want to add further exclusions to this.
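To illustrate what the awk filter matches, here is a sketch using a made-up log fragment (the file contents and paths are hypothetical, but the filter is the one used above):

```shell
# Hypothetical fragment of an RMAN "catalog start with" log, just to show
# which lines the awk filter picks up and which it skips.
cat > /tmp/orphaned_files_TEST_DATA.log << 'EOF'
List of Files Unknown to the Database
=====================================
File Name: +DATA/TEST/DATAFILE/old_users.282.700000001
File Name: +DATA/TEST/spfileTEST.ora
EOF

# Only "File Name:" lines survive, and anything containing "spfile" is excluded.
awk '{if (($1" "$2 == "File Name:") && ($3 !~ /spfile/)) {print "asmcmd rm "$3}}' \
    /tmp/orphaned_files_TEST_DATA.log
# asmcmd rm +DATA/TEST/DATAFILE/old_users.282.700000001
```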

It would be very wise to verify the contents of asmcmd_cleanup.sh, but once you are happy that you have excluded what you want to exclude, it just needs to be run as a user with appropriate privileges to execute asmcmd and with the appropriate environment set up. For example, if you have implemented GI and RDBMS role separation, e.g., separate oracle and grid users, then you’ll need to run the script as grid.

So that’s the nuts and bolts of the approach. Hopefully you’ll find it useful. If you have a cleaner (or simply better) approach to identify the orphaned files then please let me know.

Finding the ASM Diskgroups for a Database

As mentioned earlier, deriving the list of diskgroups can be achieved using:

diskgroups=$(sqlplus -s -L / as sysdba << EOF
set pages 0 feedback off
select listagg (name, ' ') within group (order by name) from v\$asm_diskgroup;
EOF
)

Generating File List for all Databases in /etc/oratab

… and if you have a number of databases you want to run this for in a single pass then you probably want to use something like the following in a script:

ORAENV_ASK=NO
for inst in $(awk -F":" '{if(($1 ~ /[1-8]$/) && ($1 !~ /^\+ASM/)) {print $1}}' /etc/oratab); do
ORACLE_SID=${inst}
. oraenv
diskgroups=$(sqlplus -s -L / as sysdba << EOF
set pages 0 feedback off
select listagg (name, ' ') within group (order by name) from v\$asm_diskgroup;
EOF
)
for diskgroup in $diskgroups; do
rman log=orphaned_files_${inst%[1-8]}_${diskgroup}.log << EOF
connect target /
catalog start with '+${diskgroup}/${inst%[1-8]}';
EOF
done
done

You just need to combine the above with the awk command to generate the asmcmd rm commands and you’ve got something that will run for all the databases on a given host and generate a script file that can be run from the GI/ASM environment (i.e., with asmcmd in your path) in order to clear up the files that the databases do not know about.
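If you want to sanity-check the oratab filter before letting it loose, it can be exercised against a mock file (all entries below are invented; note the `+` is escaped so the +ASM entry is reliably excluded):

```shell
# Mock oratab: only entries ending in a digit 1-8, excluding +ASM, are selected.
cat > /tmp/oratab.test << 'EOF'
+ASM1:/u01/grid/11.2.0.3:N
orcl1:/u01/oracle/product/11.2.0/db_1:N
dg10g:/u01/oracle/product/11.2.0/db_1:N
agent12c:/u01/agent:N
EOF

awk -F":" '{if(($1 ~ /[1-8]$/) && ($1 !~ /^\+ASM/)) {print $1}}' /tmp/oratab.test
# orcl1
```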

A final word of caution: Watch that you don’t remove backupsets that have been kept for a reason, but that the current control file doesn’t know about.

As ever, feedback most welcome, even if I’ve helped you to delete something you didn’t want to remove and you’re mad at me :-(

How to rename ASM diskgroup in 11.2

I hope you always get things right on the first attempt. Sadly, I do not. I generated a database on incorrectly named ASM diskgroups,
and there was no space to create new diskgroups and switch to an image copy. Luckily it was an 11g database, which has the ASM diskgroup rename option.
This is my step-by-step instruction for renaming an ASM diskgroup with a RAC database running on it.

INSTRUCTIONS
1. Switch to the clusterware environment
2. Get the DB configuration

$ srvctl config database -d orcl

Database unique name: orcl
Database name:
Oracle home: /u01/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +ORA_FRA/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: ORA_DATA,ORA_REDO,ORA_FRA
Mount point paths:
Services: srvc_orcl
Type: RAC
Database is administrator managed

3. Get ASM configuration

$ srvctl config asm -a

ASM home: /u01/grid/11.2.0.3
ASM listener: LISTENER
ASM is enabled.

4. If there is no SPFILE, check the init.ora files

$ cat /u01/grid/11.2.0.3/dbs/init+ASM2.ora
*.SPFILE='/dev/asmspfile'
$ cat /u01/grid/11.2.0.3/dbs/init+ASM1.ora
*.SPFILE='/dev/asmspfile'

5. Prepare Database configuration

5.1 Back up the spfile
Change environment to orcl:

sqlplus "/as sysdba"
create pfile='/u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp' from spfile;

File created.

5.2 Prepare commands to rename files

SQL > select 'ALTER DATABASE RENAME FILE '''||MEMBER||''' TO '''||REPLACE(MEMBER,'+ORA_REDO','+NEW_REDO')||'''; ' FROM v$logfile;
SQL > select 'ALTER DATABASE RENAME FILE '''||NAME||''' TO '''||REPLACE(NAME,'+ORA_DATA','+NEW_DATA')||'''; ' FROM v$datafile;
SQL > select 'ALTER DATABASE RENAME FILE '''||NAME||''' TO '''||REPLACE(NAME,'+ORA_DATA','+NEW_DATA')||'''; ' FROM v$tempfile;

– sample output

ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_1.271.783943471' TO '+NEW_REDO/orcl/onlinelog/group_1.271.783943471';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_2.259.783944261' TO '+NEW_REDO/orcl/onlinelog/group_2.259.783944261';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_3.269.783943509' TO '+NEW_REDO/orcl/onlinelog/group_3.269.783943509';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_4.267.783943593' TO '+NEW_REDO/orcl/onlinelog/group_4.267.783943593';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_12.265.783944075' TO '+NEW_REDO/orcl/onlinelog/group_12.265.783944075';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_11.257.783944289' TO '+NEW_REDO/orcl/onlinelog/group_11.257.783944289';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_13.263.783944091' TO '+NEW_REDO/orcl/onlinelog/group_13.263.783944091';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_14.260.783944103' TO '+NEW_REDO/orcl/onlinelog/group_14.260.783944103';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/system.258.783943013' TO '+NEW_DATA/orcl/datafile/system.258.783943013';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/sysaux.262.783942959' TO '+NEW_DATA/orcl/datafile/sysaux.262.783942959';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/undotbs1.261.783942985' TO '+NEW_DATA/orcl/datafile/undotbs1.261.783942985';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/data.263.783942913' TO '+NEW_DATA/orcl/datafile/data.263.783942913';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/undotbs2.259.783943011' TO '+NEW_DATA/orcl/datafile/undotbs2.259.783943011';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/tempfile/temp.281.783943239' TO '+NEW_DATA/orcl/tempfile/temp.281.783943239';

5.3 Prepare new initialisation parameter file

$ cp /u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp /u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.old
$ vi /u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp

Replace the old diskgroup names in:

control_files
db_create_file_dest
db_create_online_log_dest_1
db_create_online_log_dest_2
db_recovery_file_dest

and in all other parameters containing the old diskgroup names.
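If you prefer to script step 5.3 rather than edit in vi, a sed sketch along these lines works (the pfile fragment below is invented for illustration):

```shell
# Hypothetical pfile fragment containing the old diskgroup names.
cat > /tmp/initorcl.ora.bkp << 'EOF'
*.control_files='+ORA_FRA/orcl/control01.ctl','+ORA_DATA/orcl/control02.ctl'
*.db_create_file_dest='+ORA_DATA'
*.db_recovery_file_dest='+ORA_FRA'
EOF

# Swap every old diskgroup name for its new one in a single pass.
sed -i -e 's/+ORA_DATA/+NEW_DATA/g' \
       -e 's/+ORA_FRA/+NEW_FRA/g' \
       -e 's/+ORA_REDO/+NEW_REDO/g' /tmp/initorcl.ora.bkp

grep NEW_ /tmp/initorcl.ora.bkp
```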

5.4 Prepare database for moving: disable block change tracking and flashback

SQL > alter database disable block change tracking;
SQL > alter database flashback off;

6. Stop the database
Switch to the clusterware environment:

srvctl stop database -d orcl

7. Unmount Diskgroups on all nodes

$ asmcmd umount ORA_DATA 
$ asmcmd umount ORA_FRA 
$ asmcmd umount ORA_REDO

– check that all groups are unmounted

$ asmcmd lsdg 

8. Run the rename diskgroup commands
– stop the second node to leave the first node as exclusive owner

$ crsctl stop has    # run this on node 2
$ renamedg phase=both dgname=ORA_DATA newdgname=NEW_DATA verbose=true 
$ renamedg phase=both dgname=ORA_FRA  newdgname=NEW_FRA verbose=true 
$ renamedg phase=both dgname=ORA_REDO newdgname=NEW_REDO verbose=true 

9. Mount renamed Diskgroups

$ asmcmd mount NEW_DATA
$ asmcmd mount NEW_FRA
$ asmcmd mount NEW_REDO

– check that all groups are mounted

$ asmcmd lsdg 

10. Bring up orcl
Change environment to orcl:

$ sqlplus "/as sysdba"
SQL > startup nomount pfile='/u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp'
SQL > alter database mount;

11. Run the prepared rename-file commands

SQL > ...
SQL > create spfile='+NEW_FRA/orcl/spfileorcl.ora' from pfile='/u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp';

12. Modify the link to the spfile

$ vi /u01/oracle/product/11.2.0/db_1/dbs/initorcl1.ora    # on node 1

and

$ vi /u01/oracle/product/11.2.0/db_1/dbs/initorcl2.ora    # on node 2

so that they point to the new diskgroup

13. Modify the database configuration in clusterware

$ srvctl modify database -d orcl -p +NEW_FRA/orcl/spfileorcl.ora
$ srvctl modify database -d orcl -a "NEW_DATA,NEW_FRA,NEW_REDO"
$ srvctl config database -d orcl 
$ srvctl start database -d orcl 

14. Re-enable the temporarily disabled functionality

SQL > alter database enable block change tracking;
SQL > alter database flashback on;

15. Delete the old diskgroup resources

$ crsctl delete resource ora.ORA_DATA.dg
$ crsctl delete resource ora.ORA_FRA.dg
$ crsctl delete resource ora.ORA_REDO.dg

As a result, all the configuration that was done inside the database has been preserved.

What’s in a Name?

I’ve been meaning to write this up for a while now, but just haven’t found the time. Anyway, this is a little “gotcha” for those installing 11.2 Grid Infrastructure that care about consistency of naming… Maybe you don’t? Maybe I shouldn’t?

While building a 4 node RAC system I got to the point of:

You must run the root.sh script on the first node and wait for it to finish. If your cluster has four or more nodes, then root.sh can be run concurrently on all nodes but the first and last. As with the first node, the root.sh script on the last node must be run separately.

So, I merrily ran root.sh and afterwards found that my ASM instances were named in a way I didn’t like or expect. My 4 servers were named: ora11-2-1, ora11-2-2, ora11-2-3, ora11-2-4; and I ended up with ASM instances: +ASM1, +ASM2, +ASM3, +ASM4. All as you’d expect. However, +ASM2 was running on ora11-2-3 and +ASM3 was running on ora11-2-2!

Q1: Does it really matter?

A1: No. At least I can’t see a reason why it would matter, but if you can think of any then please comment.

Q2: Did I want to understand why it happened and how to avoid it?

A2: Of course.

So, a little digging and experimentation later I found what I believe to be the cause of the “problem”. In the rootcrs_`hostname`.log files I found the start time and the point where the ASM instance is created.

Note: There wasn’t anything specifically stating that the ASM instance was being created, but while running root.sh during later tests I watched for the creation of the ASM record in /etc/oratab and correlated that with the log file.

Start of the root.sh on nodes 2 and 3:

[root@ora11-2-2 ~]# grep "The configuration" $ORACLE_HOME/cfgtoollogs/crsconfig/rootcrs_ora11-2-*.log
2011-01-08 00:48:48: The configuration parameter file /u01/app/11.2.0.2/grid/crs/install/crsconfig_params is valid

[root@ora11-2-3 ~]# grep "The configuration" $ORACLE_HOME/cfgtoollogs/crsconfig/rootcrs_ora11-2-*.log
2011-01-08 00:48:54: The configuration parameter file /u01/app/11.2.0.2/grid/crs/install/crsconfig_params is valid

Creation of ASM instance on nodes 2 and 3:

[root@ora11-2-2 ~]# grep "Start of resource \"ora.cluster_interconnect.haip\" Succeeded" $ORACLE_HOME/cfgtoollogs/crsconfig/rootcrs_ora11-2-*.log
2011-01-08 00:56:50: Start of resource "ora.cluster_interconnect.haip" Succeeded

[root@ora11-2-3 ~]# grep "Start of resource \"ora.cluster_interconnect.haip\" Succeeded" $ORACLE_HOME/cfgtoollogs/crsconfig/rootcrs_ora11-2-*.log
2011-01-08 00:56:34: Start of resource "ora.cluster_interconnect.haip" Succeeded

The key thing to note is the times. The running of root.sh on ora11-2-2 started before ora11-2-3, but for whatever reason it got to the creation of the ASM instance on ora11-2-3 before it did on ora11-2-2.

I found it impossible to leave the system with the naming mismatch, so I used rootcrs.pl to deconfigure Clusterware and re-ran root.sh, this time allowing it to finish on each node before starting the next. I ended up with the ASM instance names matching the hostnames and got on with creating databases.

I haven’t tested this or dug deep enough into the code to be 100% sure of the above explanation, so if anyone has alternative suggestions then please share them.

Rename Database When Using ASM

This post has been a bit long in the making… Whilst finishing it off I noted the dates in the output generated when I was first doing this… Just under 12 months to get round to finishing it off… There’s never enough time to do everything I want to do :-)

Anyway, as part of some Data Guard testing, which included backing up from a physical standby, I created a couple of new virtual machines using an Oracle supplied template. When I found that the database in the template was named something other than what I wanted, my mind ran ahead to wondering what happens when you use DBNEWID with a database that resides in ASM…

I thought that maybe Oracle just handles everything for you: You run DBNEWID to change the database name and Oracle moves everything inside ASM to reflect the new database name (not that it would really need to “move” anything)… Well, it doesn’t, and maybe that would be Oracle taking a little too much control away from the DBA? Anyway, here is what happened and how I “fixed” it.

The virtual machine template “OVM_EL5U2_X86_64_ORACLE10G_PVM” comes with a preconfigured Oracle 10g database named “orcl”:

[oracle@zinc ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Apr 3 22:49:39 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

SQL> conn / as sysdba
Connected.
SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      orcl
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Which comes configured to use ASM…

[oracle@zinc ~]$ . oraenv
ORACLE_SID = [orcl] ? +ASM
[oracle@zinc ~]$ asmcmd -p
ASMCMD [+] > ls -l
State    Type    Rebal  Unbal  Name
MOUNTED  NORMAL  N      N      DATA/
ASMCMD [+] > cd DATA
ASMCMD [+DATA] > ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    DB_UNKNOWN/
                                        Y    ORCL/
ASMCMD [+DATA] >

But I didn’t want my database to be called “orcl”, although there was no reason why I NEEDED to change the database name for my planned Data Guard testing. Once I’d considered renaming the database there was no turning back…

Whilst connected to the ASM instance I checked the location of the spfile:

ASMCMD [+] > ls -l DATA/ORCL
Type           Redund  Striped  Time             Sys  Name
                                                 Y    CONTROLFILE/
                                                 Y    DATAFILE/
                                                 Y    ONLINELOG/
                                                 Y    TEMPFILE/
                                                 N    spfileorcl.ora => +DATA/DB_UNKNOWN/PARAMETERFILE/SPFILE.269.683246731
ASMCMD [+] >

I didn’t like that. “DB_UNKNOWN” is a bit ugly to my eyes. It was time to run DBNEWID (nid) and see what happens to the ASM directory structure.

Resetting my environment:

[oracle@zinc ~]$ . oraenv
ORACLE_SID = [+ASM] ? orcl

Opening the database in mount mode:

[oracle@zinc ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Apr 3 22:58:57 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

SQL> conn / as sysdba
Connected.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  612368384 bytes
Fixed Size                  2085872 bytes
Variable Size             180358160 bytes
Database Buffers          423624704 bytes
Redo Buffers                6299648 bytes
Database mounted.
SQL> exit

Running the “nid” command:

[oracle@zinc ~]$ nid target=/ setname=true dbname=dg10g

DBNEWID: Release 10.2.0.4.0 - Production on Fri Apr 3 23:00:50 2009

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to database ORCL (DBID=1192025583)

Connected to server version 10.2.0

Control Files in database:
    +DATA/orcl/controlfile/current.261.664244847
    +DATA/orcl/controlfile/current.260.664244847

Change database name of database ORCL to DG10G? (Y/[N]) => Y

Proceeding with operation
Changing database name from ORCL to DG10G
    Control File +DATA/orcl/controlfile/current.261.664244847 - modified
    Control File +DATA/orcl/controlfile/current.260.664244847 - modified
    Datafile +DATA/orcl/datafile/system.256.664244797 - wrote new name
    Datafile +DATA/orcl/datafile/undotbs1.258.664244797 - wrote new name
    Datafile +DATA/orcl/datafile/sysaux.257.664244797 - wrote new name
    Datafile +DATA/orcl/datafile/users.259.664244797 - wrote new name
    Datafile +DATA/orcl/tempfile/temp.268.664244883 - wrote new name
    Control File +DATA/orcl/controlfile/current.261.664244847 - wrote new name
    Control File +DATA/orcl/controlfile/current.260.664244847 - wrote new name
    Instance shut down

Database name changed to DG10G.
Modify parameter file and generate a new password file before restarting.
Succesfully changed database name.
DBNEWID - Completed succesfully.

Creating a pfile to point to the spfile, which has been PARTIALLY updated with parameters for dg10g as part of running nid:

[oracle@zinc ~]$ cat << EOF > $ORACLE_HOME/dbs/initdg10g.ora
> spfile='+DATA/ORCL/spfileorcl.ora'
> EOF

Creating a backup of the newly created pfile as it will be overwritten as part of later stages:

[oracle@zinc ~]$ cp $ORACLE_HOME/dbs/initdg10g.ora $ORACLE_HOME/dbs/initdg10g.ora.spfile

Creating a password file for dg10g:

[oracle@zinc ~]$ orapwd file=$ORACLE_HOME/dbs/orapwdg10g password=xxxxxxxxxx*

* It strikes me that not being able to run orapwd without the password argument is a bit of a security flaw – this has been addressed in 11.1 and above.

Modifying /etc/oratab to reflect the new database name:

Change “orcl:/u01/app/oracle/product/10.2.0/db_1:W” to “dg10g:/u01/app/oracle/product/10.2.0/db_1:W”
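That edit can also be scripted; a minimal sed sketch, shown here against a throwaway copy rather than the real /etc/oratab:

```shell
# Throwaway stand-in for /etc/oratab with the entry from the post.
cat > /tmp/oratab << 'EOF'
orcl:/u01/app/oracle/product/10.2.0/db_1:W
EOF

# Rename the entry in place; anchoring on "^orcl:" avoids touching the path.
sed -i 's/^orcl:/dg10g:/' /tmp/oratab
cat /tmp/oratab
# dg10g:/u01/app/oracle/product/10.2.0/db_1:W
```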

Setting the environment for the new database name and SID:

[oracle@zinc ~]$ . oraenv
ORACLE_SID = [orcl] ? dg10g
[oracle@zinc ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Apr 3 23:35:19 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup nomount
ORACLE instance started.

Total System Global Area  612368384 bytes
Fixed Size                  2085872 bytes
Variable Size             167775248 bytes
Database Buffers          436207616 bytes
Redo Buffers                6299648 bytes
SQL> create pfile from spfile;

File created.

SQL> shutdown
ORA-01507: database not mounted

ORACLE instance shut down.
SQL> exit

Editing $ORACLE_HOME/dbs/initdg10g.ora

orcl.__db_cache_size=423624704
dg10g.__db_cache_size=436207616
orcl.__java_pool_size=4194304
orcl.__large_pool_size=4194304
dg10g.__large_pool_size=4194304
orcl.__shared_pool_size=171966464
orcl.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/orcl/adump'
*.background_dump_dest='/u01/app/oracle/admin/orcl/bdump'
*.compatible='10.2.0.3.0'
*.control_files='+DATA/orcl/controlfile/current.261.664244847','+DATA/orcl/controlfile/current.260.664244847'
*.core_dump_dest='/u01/app/oracle/admin/orcl/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.local_listener='(ADDRESS = (PROTOCOL=TCP) (HOST=zinc) (PORT=1521))'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.shared_servers=5
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/orcl/udump'

Note that some parameters have already been set for “dg10g”. In order to achieve what I want in terms of removing all references to orcl from the directory structure, all parameters that contain “orcl” in their value were updated. My preferred approach is to use “:%s/orcl/dg10g/g” in vi, but approaches to this type of task are often down to personal preference :-)

My find and replace approach left me with two entries for __db_cache_size and __large_pool_size, so one of each was removed.
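For anyone scripting this instead, deleting the old instance-prefixed lines before doing the substitution avoids the duplicates in the first place; a sketch using a three-line stand-in for the real pfile:

```shell
# Three representative lines from the pfile; the real file has many more.
cat > /tmp/initdg10g.ora << 'EOF'
orcl.__db_cache_size=423624704
dg10g.__db_cache_size=436207616
*.db_name='orcl'
EOF

# Delete the old instance-prefixed lines first, then rename the rest, which
# sidesteps the duplicate __db_cache_size/__large_pool_size entries entirely.
sed -i -e '/^orcl\.__/d' -e 's/orcl/dg10g/g' /tmp/initdg10g.ora
cat /tmp/initdg10g.ora
```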

As I would be recreating the control files in the desired location the CONTROL_FILES parameter was further modified to simply provide the ASM disk group that the files are to be created in.

My final parameter file is shown below:

dg10g.__db_cache_size=436207616
dg10g.__java_pool_size=4194304
dg10g.__large_pool_size=4194304
dg10g.__shared_pool_size=171966464
dg10g.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/dg10g/adump'
*.background_dump_dest='/u01/app/oracle/admin/dg10g/bdump'
*.compatible='10.2.0.3.0'
*.control_files='+DATA','+DATA'
*.core_dump_dest='/u01/app/oracle/admin/dg10g/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='dg10g'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=dg10gXDB)'
*.job_queue_processes=10
*.local_listener='(ADDRESS = (PROTOCOL=TCP) (HOST=zinc) (PORT=1521))'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.shared_servers=5
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/dg10g/udump'

Creating the *_dest directories:

[oracle@zinc ~]$ mkdir -p /u01/app/oracle/admin/dg10g/adump
[oracle@zinc ~]$ mkdir /u01/app/oracle/admin/dg10g/bdump
[oracle@zinc ~]$ mkdir /u01/app/oracle/admin/dg10g/cdump
[oracle@zinc ~]$ mkdir /u01/app/oracle/admin/dg10g/udump

In order to have Oracle automatically update the spfile with the location of the new control files via the ALTER SYSTEM SET CONTROL_FILES command, an spfile should be created from the above pfile.

[oracle@zinc ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Apr 3 23:42:20 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup nomount
ORACLE instance started.

Total System Global Area  612368384 bytes
Fixed Size                  2085872 bytes
Variable Size             167775248 bytes
Database Buffers          436207616 bytes
Redo Buffers                6299648 bytes
SQL> create spfile = '+DATA/DG10G/spfiledg10g.ora' from pfile;

File created.

SQL> shutdown
ORA-01507: database not mounted

ORACLE instance shut down.
SQL> exit

The backup pfile (manually created earlier) will now be used to point to the new ASM spfile.

[oracle@zinc dbs]$ mv initdg10g.ora initdg10g.ora.pfile
[oracle@zinc dbs]$ cp initdg10g.ora.spfile initdg10g.ora

Modifying initdg10g.ora to point to the correct spfile:

SPFILE='+DATA/DG10G/spfiledg10g.ora'

Using the controlfiles within +DATA/ORCL to create new controlfiles under +DATA/DG10G:

[oracle@zinc ~]$ rman target /

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Apr 3 23:45:39 2009

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area     612368384 bytes

Fixed Size                     2085872 bytes
Variable Size                167775248 bytes
Database Buffers             436207616 bytes
Redo Buffers                   6299648 bytes

RMAN> restore controlfile from '+DATA/orcl/controlfile/current.261.664244847';

Starting restore at 03-APR-09
using channel ORA_DISK_1

channel ORA_DISK_1: copied control file copy
output file name=+DATA/dg10g/controlfile/current.270.683250567
output file name=+DATA/dg10g/controlfile/current.271.683250767
Finished restore at 03-APR-09

RMAN> sql "alter system set control_files = ''+DATA/dg10g/controlfile/current.270.683250567'',''+DATA/dg10g/controlfile/current.271.683250767'' scope = spfile";

sql statement: alter system set control_files = ''+DATA/dg10g/controlfile/current.270.683250567'',''+DATA/dg10g/controlfile/current.271.683250767'' scope = spfile

RMAN> shutdown

Oracle instance shut down

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area     612368384 bytes

Fixed Size                     2085872 bytes
Variable Size                167775248 bytes
Database Buffers             436207616 bytes
Redo Buffers                   6299648 bytes

Using report schema to see where the datafiles are currently located within ASM and then using BACKUP AS COPY to create a new copy of the files in the DG10G directory.

RMAN> report schema;

Report of database schema

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    600      SYSTEM               ***     +DATA/orcl/datafile/system.256.664244797
2    460      UNDOTBS1             ***     +DATA/orcl/datafile/undotbs1.258.664244797
3    400      SYSAUX               ***     +DATA/orcl/datafile/sysaux.257.664244797
4    5        USERS                ***     +DATA/orcl/datafile/users.259.664244797

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    29       TEMP                 32767       +DATA/orcl/tempfile/temp.268.664244883

RMAN> backup as copy database;

Starting backup at 04-APR-2009 00:35:53
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=+DATA/orcl/datafile/system.256.664244797
output filename=+DATA/dg10g/datafile/system.259.683253353 tag=TAG20090404T003553 recid=17 stamp=683253441
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:35
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=+DATA/orcl/datafile/undotbs1.258.664244797
output filename=+DATA/dg10g/datafile/undotbs1.275.683253449 tag=TAG20090404T003553 recid=18 stamp=683253517
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:15
channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=+DATA/orcl/datafile/sysaux.257.664244797
output filename=+DATA/dg10g/datafile/sysaux.257.683253525 tag=TAG20090404T003553 recid=19 stamp=683253586
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:06
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=+DATA/orcl/datafile/users.259.664244797
output filename=+DATA/dg10g/datafile/users.274.683253589 tag=TAG20090404T003553 recid=20 stamp=683253590
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=+DATA/dg10g/controlfile/backup.258.683253593 tag=TAG20090404T003553 recid=21 stamp=683253595
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 04-APR-2009 00:40:00

RMAN> list copy of database;

List of Datafile Copies
Key     File S Completion Time      Ckp SCN    Ckp Time             Name
------- ---- - -------------------- ---------- -------------------- ----
17      1    A 04-APR-2009 00:37:21 1172322    04-APR-2009 00:26:45 +DATA/dg10g/datafile/system.259.683253353
18      2    A 04-APR-2009 00:38:37 1172322    04-APR-2009 00:26:45 +DATA/dg10g/datafile/undotbs1.275.683253449
19      3    A 04-APR-2009 00:39:46 1172322    04-APR-2009 00:26:45 +DATA/dg10g/datafile/sysaux.257.683253525
20      4    A 04-APR-2009 00:39:50 1172322    04-APR-2009 00:26:45 +DATA/dg10g/datafile/users.274.683253589

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/dg10g/datafile/system.259.683253353"
datafile 2 switched to datafile copy "+DATA/dg10g/datafile/undotbs1.275.683253449"
datafile 3 switched to datafile copy "+DATA/dg10g/datafile/sysaux.257.683253525"
datafile 4 switched to datafile copy "+DATA/dg10g/datafile/users.274.683253589"

RMAN> alter database open;

database opened

RMAN> report schema;

Report of database schema

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    600      SYSTEM               ***     +DATA/dg10g/datafile/system.259.683253353
2    460      UNDOTBS1             ***     +DATA/dg10g/datafile/undotbs1.275.683253449
3    400      SYSAUX               ***     +DATA/dg10g/datafile/sysaux.257.683253525
4    5        USERS                ***     +DATA/dg10g/datafile/users.274.683253589

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    29       TEMP                 32767       +DATA/orcl/tempfile/temp.268.664244883

The next job is to move the tempfile:

RMAN> sql 'alter tablespace temp add tempfile';

sql statement: alter tablespace temp add tempfile

RMAN> sql "alter database tempfile ''+DATA/orcl/tempfile/temp.268.664244883'' drop";

sql statement: alter database tempfile ''+DATA/orcl/tempfile/temp.268.664244883'' drop

RMAN> exit

At this point the only files (used by the database/instance) left in +DATA/orcl are the online redo logs. As far as I’m aware the only way to move these is to drop and recreate them, but with a little thought and appropriate use of commands this can be done with the database open and in use.

Use the following command to create a new online redo log group. This relies on the DB_CREATE_FILE_DEST or DB_CREATE_ONLINE_LOG_DEST_n parameters being set.

SQL> alter database add logfile group <a group number that does not already exist>;
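For example, to have each new group created with two members in the DATA disk group (the two-members-per-group layout shown in the v$logfile output further down), you could point both OMF online log destinations at the same disk group before adding a group. The +DATA name and group number 4 are just from this example environment; adjust to suit:

SQL> alter system set db_create_online_log_dest_1='+DATA' scope=both;
SQL> alter system set db_create_online_log_dest_2='+DATA' scope=both;
SQL> alter database add logfile group 4;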

Once the new online redo log group has been created in the DG10G directory within the ASM diskgroup, the existing online redo log groups in the ORCL directory were dropped using:

SQL> alter database drop logfile group <a group number that is not in use and is in the ORCL directory>;

Something to note is that, quite obviously, you cannot drop a log file group that is currently in use (ORA-01623) or one that is needed for crash recovery (ORA-01624), and you cannot reduce the number of redo log groups to fewer than two (ORA-01567)… Hence the comment about thought and appropriate commands. If you are trying this, or doing something similar, then the following commands might be helpful:

SQL> alter system switch logfile;
SQL> alter system checkpoint;
SQL> alter system archive log group <group requiring archiving>;
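One sequence that keeps at least two groups available at all times, and ends up with the same group numbers in the new location (which is what the v$logfile output below shows), is to add a temporary extra group and then cycle through the originals. A sketch, with group 4 as a hypothetical temporary group:

SQL> alter database add logfile group 4;
SQL> alter system switch logfile;
SQL> alter system checkpoint;
SQL> alter database drop logfile group 1;
SQL> alter database add logfile group 1;

… then repeat the switch/checkpoint/drop/add for groups 2 and 3 (with extra log switches whenever the target group is still CURRENT or ACTIVE), and finally:

SQL> alter database drop logfile group 4;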

Anyway, after a bit of juggling I got what I wanted…

SQL> select member from v$logfile
  2  order by member;

MEMBER
--------------------------------------------------------------------------------
+DATA/dg10g/onlinelog/group_1.262.683378145
+DATA/dg10g/onlinelog/group_1.263.683378123
+DATA/dg10g/onlinelog/group_2.264.683377935
+DATA/dg10g/onlinelog/group_2.265.683377913
+DATA/dg10g/onlinelog/group_3.266.683378249
+DATA/dg10g/onlinelog/group_3.267.683378229

6 rows selected.
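As a final check before deleting anything from the old directory, it is worth confirming that the control file no longer references +DATA/orcl at all. The query below is a simple sketch of that check and should return no rows; a similar query against v$controlfile and v$archived_log covers the remaining file types:

SQL> select name from v$datafile where name like '+DATA/orcl/%'
  2  union all
  3  select name from v$tempfile where name like '+DATA/orcl/%'
  4  union all
  5  select member from v$logfile where member like '+DATA/orcl/%';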

Whilst finally writing this up it occurred to me that there are other, possibly better, ways of achieving the same result, e.g., simply duplicating the database… Nonetheless, I found this a useful exercise, and if you’ve read this far then it can’t have been too dull :-)

“Oracle SETASMGIDWRAP” (An illegitimate Google Whack)

When creating a database on a new or seldom used version of Oracle I always like to use the “Generate Database Creation Scripts” option of DBCA so I can have a look at what it’s going to do for me before I decide which scripts I want to run. I recently saw the following line in one of these scripts generated by an 11.2 DBCA:

host /u01/grid/11.2.0/bin/setasmgidwrap o=/u01/app/oracle/product/11.2.0/db_1/bin/oracle 

The name is pretty self-explanatory (it’s the step that changes the group ownership of the oracle binary from the software owner’s primary group (probably OINSTALL) to the OSASM group), but I thought I’d have a quick search on My Oracle Support to find out more…

No results found.

Nothing unusual there; since MetaLink was downgraded to its present state I’ve become accustomed to this message :). Let’s try the Oracle documentation:

Your search term oracle setasmgidwrap did not match any topics.

Surely Google will hit something…

Your search – oracle setasmgidwrap – did not match any documents

No hits anywhere. So I’ve created this post purely to create an illegitimate Oracle Google Whack (or at least it will be one once this post has been indexed by the search engines). It’s illegitimate, by the way, because:

A Googlewhack must consist of two actual words found in a dictionary.

Below is a demonstration of SETASMGIDWRAP in action (watch the group ownership on the oracle binary):

%*> ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x    1 oracle   oinstall  260199063 Jan 12 15:15 /u01/app/oracle/product/11.2.0/db_1/bin/oracle
%*> $CRS_HOME/bin/setasmgidwrap o=$ORACLE_HOME/bin/oracle
%*> ls -l $ORACLE_HOME/bin/oracle
-r-sr-s--x    1 oracle   asmadmin  260199063 Jan 12 15:15 /u01/app/oracle/product/11.2.0/db_1/bin/oracle 

Grid Control with ASM

A colleague recently hit a problem configuring Grid Control with a repository database that uses ASM.

You do not have sufficient privilege to write to the specified path for tablespaces

He checked out MetaLink (or, as I should say, “My Oracle Support”) and found note 738445.1.

… Now, I don’t know if I’m misunderstanding what it is suggesting, but an alternative solution that I have used previously is to specify a file system location for both of the management tablespaces, and then move them to ASM using RMAN after they have been created.

In summary:

  1. Allocate sufficient temporary storage (referred to as the “staging” area) for the Grid Control specific tablespaces (2GB should be more than enough judging by the size of these tablespaces in my current installation).
  2. Install Grid Control (I followed Jeff Hunter’s excellent instructions), specifying a location on the “staging” filesystem for the Grid Control tablespaces.
  3. Post installation, shut down the OMS (assuming ORACLE_HOME is set to the OMS home):

     $ $ORACLE_HOME/opmn/bin/opmnctl stopall

  4. Use RMAN to backup “as copy” the relevant tablespaces (MGMT_TABLESPACE and MGMT_ECM_DEPOT_TS) with format ‘+DATA’ (or other disk group as applicable):

     RMAN> backup as copy tablespace MGMT_TABLESPACE format '+DATA';
     RMAN> backup as copy tablespace MGMT_ECM_DEPOT_TS format '+DATA';

  5. Take the relevant tablespaces offline:

     RMAN> sql 'alter tablespace MGMT_TABLESPACE offline';
     RMAN> sql 'alter tablespace MGMT_ECM_DEPOT_TS offline';

  6. Switch the tablespaces to the copies:

     RMAN> switch tablespace MGMT_TABLESPACE to copy;
     RMAN> switch tablespace MGMT_ECM_DEPOT_TS to copy;

  7. Recover the tablespaces:

     RMAN> recover tablespace MGMT_TABLESPACE;
     RMAN> recover tablespace MGMT_ECM_DEPOT_TS;

  8. Bring the tablespaces back online:

     RMAN> sql 'alter tablespace MGMT_TABLESPACE online';
     RMAN> sql 'alter tablespace MGMT_ECM_DEPOT_TS online';

  9. Start the OMS processes (assuming ORACLE_HOME is set to the OMS home):

     $ $ORACLE_HOME/opmn/bin/opmnctl startall
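The RMAN portion of the above is mechanical enough to script. The sketch below only generates the RMAN command file; the +DATA disk group and the /tmp output path are assumptions, so review the generated file before running it against the repository database.

```shell
#!/bin/sh
# Sketch: generate (but do not run) the RMAN commands to move the
# Grid Control tablespaces into ASM.
# DG and CMDFILE are assumed defaults -- override via the environment.
DG="${DG:-+DATA}"
CMDFILE="${CMDFILE:-/tmp/move_mgmt_ts.rman}"

cat > "$CMDFILE" <<EOF
backup as copy tablespace MGMT_TABLESPACE format '$DG';
backup as copy tablespace MGMT_ECM_DEPOT_TS format '$DG';
sql 'alter tablespace MGMT_TABLESPACE offline';
sql 'alter tablespace MGMT_ECM_DEPOT_TS offline';
switch tablespace MGMT_TABLESPACE to copy;
switch tablespace MGMT_ECM_DEPOT_TS to copy;
recover tablespace MGMT_TABLESPACE;
recover tablespace MGMT_ECM_DEPOT_TS;
sql 'alter tablespace MGMT_TABLESPACE online';
sql 'alter tablespace MGMT_ECM_DEPOT_TS online';
EOF

cat "$CMDFILE"
# With the OMS stopped, run:  rman target / cmdfile "$CMDFILE"
```

The switch/recover/online steps need the tablespaces to stay offline while they run, so keep the OMS down for the duration.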

Now that really has got to be easier than the alternative (only edited highlights, because reproducing MetaLink/My Oracle Support documentation is not going to win me friends at Oracle):

Do not install grid console on the local machine on which ASM instance exist.

Then

After successful installation of grid control console, install additional OMS on local machine where the ASM is installed.

Maybe I missed something that is RAC specific, but the note does state:

Same problem occurs for

+ Multiple node RAC Database.
+ Single node RAC Database.
+ Single instance Database.

With RAC the logical option would be to use a clustered filesystem as the tablespace staging area, but I can’t see why dropping to a single node RAC configuration in order to get Grid Control set up, and then moving back to multi-node, would not work…

Anyone out there with other experiences of this?