NOLOGGING in numbers

Hi All,
I did a small investigation into redo generation. From the early days of my career I remembered that NOLOGGING operations are very effective for performance, but I had never tried to quantify the effect.
Every application can theoretically be split into 4 groups of tables (the names are my own, but hopefully they make sense):
1) Regular tables – contain valuable information that needs to be stored for legal and functional purposes. Stored as normal tables.
2) Staging tables – contain process-lifetime-specific information that is easily re-creatable. Used for transferring information between sessions and for report generation. Stored as regular tables or materialized views.
3) Session temporary tables – contain process-lifetime-specific information that is easily re-creatable. Used for reporting. Stored as GLOBAL TEMPORARY tables ON COMMIT PRESERVE ROWS.
4) Transaction temporary tables – contain process-lifetime-specific information that is easily re-creatable. Used for processing optimisation. Stored as GLOBAL TEMPORARY tables ON COMMIT DELETE ROWS.
By default all 4 groups generate redo log records, which can consume a significant amount of resources. The redo information is valuable if:
1) We support a standby database.
2) The information inside the tables is valuable and has to be safe in case of a database crash.
To make the standby or backup completely usable after a NOLOGGING statement is run, a mechanism other than database recovery must be used to get or create current copies of the affected blocks: you have to drop and recreate the object with the invalidated blocks, or truncate it, using the program that maintains the object. Thus an extra step has to be introduced into the switchover/failover process for the standby database.
Again, based on my understanding, the only business requirement for logging is to keep the data in “Regular tables” safe. The safety of the data in the other groups is not as important.
The only DML operation that can be optimised in terms of redo generation is INSERT with the APPEND hint (MERGE is effectively a presentation layer above INSERT, so the two can be treated together). The APPEND hint, when it works, has one negative side effect: the newly inserted data is not actually available until the end of the transaction, due to the following error:
ORA-12838: cannot read/modify an object after modifying it in parallel
This is linked to the fact that Oracle cannot build a consistent image of a block for which there is no UNDO information. It effectively makes using this hint on global temporary tables with ON COMMIT DELETE ROWS pointless: you can insert the data, but you will never be able to read it before it is deleted.
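A minimal sketch of the behaviour (table and column names are illustrative, not from the original test):

CREATE TABLE append_demo (column1 NUMBER);

INSERT /*+ APPEND */ INTO append_demo
SELECT level FROM dual CONNECT BY level <= 10;

-- Reading the table in the same transaction fails:
SELECT COUNT(*) FROM append_demo;
-- ORA-12838: cannot read/modify an object after modifying it in parallel

COMMIT;

-- After the commit the data is visible again:
SELECT COUNT(*) FROM append_demo;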
Another fact I have to highlight: UPDATE and DELETE always generate redo. Thus if the table is intensively updated, the gains will be minimal. Avoiding these operations on temporary tables is another skill developers have to get used to for optimal application performance.
There are 5 parameters that actually affect segment logging: database LOGGING, database FORCE LOGGING, tablespace LOGGING, tablespace FORCE LOGGING (which can be switched on for tablespaces with “Regular tables” and switched off for tablespaces with “Staging tables”) and table LOGGING. Global temporary tables are always in NOLOGGING mode, so for the “Session temporary tables” and “Transaction temporary tables” groups we can assume all parameters are equal to NO. Production databases should always run in protected mode, so database LOGGING should always be YES; the value NO is outside the scope of this investigation.
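For reference, a quick sketch of how these settings can be inspected and changed (the tablespace and table names are illustrative):

-- Database level
SELECT log_mode, force_logging FROM v$database;
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE NO FORCE LOGGING;

-- Tablespace level
SELECT tablespace_name, logging, force_logging FROM dba_tablespaces;
ALTER TABLESPACE regular_ts FORCE LOGGING;
ALTER TABLESPACE staging_ts NOLOGGING;

-- Table level
ALTER TABLE test.bigtable NOLOGGING;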
To test, I created the table TEST.BIGTABLE (column1 NUMBER) with 39,999,960 rows and a few target tables for INSERT AS SELECT statements from the BIGTABLE dataset. The results are below.

Regular table (* – any value)

Parameter / Result                   Run 1    Run 2    Run 3    Run 4    Run 5    Run 6    Run 7
TABLE LOGGING                        *        *        N        Y        N        Y        Y
TABLESPACE LOGGING                   *        *        Y        N        N        Y        Y
TABLESPACE FORCE LOGGING             *        Y        N        N        N        N        N
DATABASE LOGGING                     Y        Y        Y        Y        Y        Y        Y
DATABASE FORCE LOGGING               Y        N        N        N        N        N        N
Redo for INSERT APPEND               501000K  501000K  457K     501000K  456K     501000K  501000K
Redo for standard INSERT AS SELECT   501000K  501000K  501000K  501000K  501000K  501000K  501000K

Amount of redo for temporary tables

                           Standard INSERT AS SELECT   INSERT APPEND
Transaction temp table     110K                        0.3K
Session temp table         110K                        0.3K

I hope all of the above makes sense and can be put to good use.

P.S. The “redo size” values were taken from AUTOTRACE statistics.
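A minimal sketch of the kind of measurement run in SQL*Plus (the target table names are illustrative; BIGTABLE is the source table described above):

SET AUTOTRACE TRACEONLY STATISTICS

INSERT /*+ APPEND */ INTO test.target_nolog SELECT * FROM test.bigtable;
-- "redo size" in the statistics section shows the redo generated
ROLLBACK;

INSERT INTO test.target_nolog SELECT * FROM test.bigtable;
-- compare the "redo size" value with the APPEND run
ROLLBACK;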

How to rename ASM diskgroup in 11.2

I hope you always do things right on the first attempt. Sadly, I do not. I had created a database on incorrectly named ASM diskgroups, and there was no space to create new ones and switch to an image copy. Luckily it was an 11g database, which has an ASM diskgroup rename option.
This is my step-by-step instruction for renaming ASM diskgroups with a RAC database running on them.

INSTRUCTIONS
1. Switch to the clusterware environment
2. Get the DB configuration

$ srvctl config database -d orcl

Database unique name: orcl
Database name:
Oracle home: /u01/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +ORA_FRA/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: ORA_DATA,ORA_REDO,ORA_FRA
Mount point paths:
Services: srvc_orcl
Type: RAC
Database is administrator managed

3. Get ASM configuration

$ srvctl config asm -a

ASM home: /u01/grid/11.2.0.3
ASM listener: LISTENER
ASM is enabled.

4. If there is no SPFILE, check the init.ora files

$ cat /u01/grid/11.2.0.3/dbs/init+ASM2.ora
*.SPFILE='/dev/asmspfile'
$ cat /u01/grid/11.2.0.3/dbs/init+ASM1.ora
*.SPFILE='/dev/asmspfile'

5. Prepare Database configuration

5.1 Backup spfile
Change environment to the orcl database:

sqlplus "/as sysdba"
create pfile='/u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp' from spfile;

File created.

5.2 Prepare commands to rename files

SQL > select 'ALTER DATABASE RENAME FILE '''||MEMBER||''' TO '''||REPLACE(MEMBER,'+ORA_REDO','+NEW_REDO')||'''; ' FROM v$logfile;
SQL > select 'ALTER DATABASE RENAME FILE '''||NAME||''' TO '''||REPLACE(NAME,'+ORA_DATA','+NEW_DATA')||'''; ' FROM v$datafile;
SQL > select 'ALTER DATABASE RENAME FILE '''||NAME||''' TO '''||REPLACE(NAME,'+ORA_DATA','+NEW_DATA')||'''; ' FROM v$tempfile;

– sample output

ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_1.271.783943471' TO '+NEW_REDO/orcl/onlinelog/group_1.271.783943471';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_2.259.783944261' TO '+NEW_REDO/orcl/onlinelog/group_2.259.783944261';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_3.269.783943509' TO '+NEW_REDO/orcl/onlinelog/group_3.269.783943509';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_4.267.783943593' TO '+NEW_REDO/orcl/onlinelog/group_4.267.783943593';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_12.265.783944075' TO '+NEW_REDO/orcl/onlinelog/group_12.265.783944075';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_11.257.783944289' TO '+NEW_REDO/orcl/onlinelog/group_11.257.783944289';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_13.263.783944091' TO '+NEW_REDO/orcl/onlinelog/group_13.263.783944091';
ALTER DATABASE RENAME FILE '+ORA_REDO/orcl/onlinelog/group_14.260.783944103' TO '+NEW_REDO/orcl/onlinelog/group_14.260.783944103';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/system.258.783943013' TO '+NEW_DATA/orcl/datafile/system.258.783943013';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/sysaux.262.783942959' TO '+NEW_DATA/orcl/datafile/sysaux.262.783942959';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/undotbs1.261.783942985' TO '+NEW_DATA/orcl/datafile/undotbs1.261.783942985';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/data.263.783942913' TO '+NEW_DATA/orcl/datafile/data.263.783942913';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/datafile/undotbs2.259.783943011' TO '+NEW_DATA/orcl/datafile/undotbs2.259.783943011';
ALTER DATABASE RENAME FILE '+ORA_DATA/orcl/tempfile/temp.281.783943239' TO '+NEW_DATA/orcl/tempfile/temp.281.783943239';

5.3 Prepare new initialisation parameter file

$ cp /u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp /u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.old
$ vi /u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp

Replace the old diskgroup names in:

*.control_files
*.db_create_file_dest
*.db_create_online_log_dest_1
*.db_create_online_log_dest_2
*.db_recovery_file_dest

and in all other parameters containing the old diskgroup names; a sample of the edited entries is sketched below.
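A hedged sketch of what the edited parameters might look like (file names elided; the values follow the ORA_* to NEW_* renames used in this post):

*.control_files='+NEW_DATA/orcl/controlfile/...','+NEW_FRA/orcl/controlfile/...'
*.db_create_file_dest='+NEW_DATA'
*.db_create_online_log_dest_1='+NEW_REDO'
*.db_create_online_log_dest_2='+NEW_FRA'
*.db_recovery_file_dest='+NEW_FRA'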

5.4 Prepare database for moving: disable block change tracking and flashback

SQL > alter database disable block change tracking;
SQL > alter database flashback off;

6. Stop the database
Switch to the clusterware environment

$ srvctl stop database -d orcl

7. Unmount Diskgroups on all nodes

$ asmcmd umount ORA_DATA 
$ asmcmd umount ORA_FRA 
$ asmcmd umount ORA_REDO

– check that all groups are unmounted

$ asmcmd lsdg 

8. Run the rename diskgroup commands
– stop HAS on the second node to leave the first node as the exclusive owner

$ crsctl stop has     # run on node 2
$ renamedg phase=both dgname=ORA_DATA newdgname=NEW_DATA verbose=true 
$ renamedg phase=both dgname=ORA_FRA  newdgname=NEW_FRA verbose=true 
$ renamedg phase=both dgname=ORA_REDO newdgname=NEW_REDO verbose=true 

9. Mount renamed Diskgroups

$ asmcmd mount NEW_DATA
$ asmcmd mount NEW_FRA
$ asmcmd mount NEW_REDO

– check that all groups are mounted

$ asmcmd lsdg 

10. Bring up orcl
Change environment to the orcl database:

$ sqlplus "/as sysdba"
SQL > startup nomount pfile='/u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp'
SQL > alter database mount;

11. Run the prepared rename files commands, then recreate the spfile

SQL > ...
SQL > create spfile='+NEW_FRA/orcl/spfileorcl.ora' from pfile='/u01/oracle/product/11.2.0/db_1/dbs/initorcl.ora.bkp';

12. Modify the link to the spfile

$ vi /u01/oracle/product/11.2.0/db_1/dbs/initorcl1.ora on node 1

and

$ vi /u01/oracle/product/11.2.0/db_1/dbs/initorcl2.ora on node 2

so that they point to the new diskgroup, as sketched below
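For illustration, the contents of each pointer file after the edit would be along these lines (assuming the spfile location created above):

SPFILE='+NEW_FRA/orcl/spfileorcl.ora'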

13. Modify the database configuration in clusterware

$ srvctl modify database -d orcl -p +NEW_FRA/orcl/spfileorcl.ora
$ srvctl modify database -d orcl -a "NEW_DATA,NEW_FRA,NEW_REDO"
$ srvctl config database -d orcl 
$ srvctl start database -d orcl 

14. Re-enable the temporarily disabled functionality

SQL > alter database enable block change tracking;
SQL > alter database flashback on;

15. Delete the old diskgroup resources

$ crsctl delete resource ora.ORA_DATA.dg
$ crsctl delete resource ora.ORA_FRA.dg
$ crsctl delete resource ora.ORA_REDO.dg

As a result, all the configuration that was done inside the database has been preserved.

Partition pruning using non-prefixed local indexes

This small post is from the nice and frightening world of partitions, specifically the case of partition pruning. If you are using partitions in your application, there is a fair chance that your database is quite big and performance demanding. If you are not working in the classic data-warehousing manner, you have to care about DML performance. One of the basic rules of DML performance is: the fewer indexes you have, and the fewer columns in the indexes that remain, the higher the DML performance. Thus one of the architectural DBA targets is to reduce the indexed columns to just the most critical ones. Another axiom is that smaller indexes are easier to support and manage than big ones, so local indexes are usually preferable. The conclusion from these two axioms is that most indexes on partitioned tables in your application are probably non-prefixed local indexes. Now we come to the topic of this post: how the Oracle optimizer deals with such indexes and what we can do about it.

Let’s create a list-partitioned table TRANSACTIONS:

Column Name     Comment
SERNO           Unique identifier
PARTITIONKEY    Partition key
SGENERAL        Some indexed field

And create two indexes: TRANSACTIONS_PK (SERNO, PARTITIONKEY) and the local partitioned non-prefixed index TRANSACTIONSI (SGENERAL). A minimal DDL sketch follows.
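The sketch below assumes datatypes and partition names (the JUL2012 partition matches the examples that follow); it is an illustration, not the original test schema:

CREATE TABLE transactions (
  serno        NUMBER  NOT NULL,
  partitionkey CHAR(7) NOT NULL,
  sgeneral     NUMBER
)
PARTITION BY LIST (partitionkey) (
  PARTITION "JUN2012" VALUES ('JUN2012'),
  PARTITION "JUL2012" VALUES ('JUL2012')
);

CREATE UNIQUE INDEX transactions_pk ON transactions (serno, partitionkey);

CREATE INDEX transactionsi ON transactions (sgeneral) LOCAL;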

Let’s start from the most common statement


EXPLAIN PLAN FOR
 SELECT SGENERAL, count(1)
 FROM transactions
 WHERE PARTITIONKEY='JUL2012' AND SGENERAL is not null
 GROUP BY SGENERAL;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------
Plan hash value: 2338610280

--------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |              |     5 |    55 | 69978   (1)| 00:14:00 |       |       |
|   1 |  HASH GROUP BY          |              |     5 |    55 | 69978   (1)| 00:14:00 |       |       |
|   2 |   PARTITION LIST SINGLE |              | 5340K |   56M | 69727   (1)| 00:13:57 |   KEY |   KEY |
|*  3 |    TABLE ACCESS FULL    | TRANSACTIONS | 5340K |   56M | 69727   (1)| 00:13:57 |    19 |    19 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("SGENERAL" IS NOT NULL)

Doesn’t that look non-optimal? The optimizer has chosen to do a FULL SCAN even though it has the nice-looking index TRANSACTIONSI. The problem is that this kind of partition pruning works only for prefixed indexes, which means you have to pay for maintaining an extra column in your index even if you do not need it, because the partition could clearly be identified from the same rules that apply to the table.

But luckily we have extended syntax. Using it, we can manually define the partition that should be used. In our case the partition name is equal to the partition key. Getting the partition name is very simple for list partitions, but it can be derived in all the other cases too.

EXPLAIN PLAN FOR
 SELECT SGENERAL, count(1)
 FROM TRANSACTIONS PARTITION ("JUL2012")
 WHERE SGENERAL is not null
 GROUP BY SGENERAL;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------
Plan hash value: 198159339

--------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |     4 |    20 |    10  (20)| 00:00:01 |       |       |
|   1 |  SORT GROUP BY NOSORT   |               |     4 |    20 |    10  (20)| 00:00:01 |       |       |
|   2 |   PARTITION LIST SINGLE |               |  8890 | 44450 |     8   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX FULL SCAN      | TRANSACTIONSI |  8890 | 44450 |     8   (0)| 00:00:01 |    19 |    19 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("SGENERAL" IS NOT NULL)

This plan looks much better; according to the optimizer estimations we save about 14 minutes. But there is an even more attractive syntax for those who do not want to spend their time calculating the partition name.

 EXPLAIN PLAN FOR
 SELECT SGENERAL, count(1)
 FROM TRANSACTIONS PARTITION FOR ('JUL2012')
 WHERE SGENERAL is not null
 GROUP BY SGENERAL;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------
Plan hash value: 198159339

--------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |     4 |    20 |    10  (20)| 00:00:01 |       |       |
|   1 |  SORT GROUP BY NOSORT   |               |     4 |    20 |    10  (20)| 00:00:01 |       |       |
|   2 |   PARTITION LIST SINGLE |               |  8890 | 44450 |     8   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX FULL SCAN      | TRANSACTIONSI |  8890 | 44450 |     8   (0)| 00:00:01 |    19 |    19 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("SGENERAL" IS NOT NULL)

The only problem with the last syntax is that it is supported only from version 11.2. The syntax was developed to support interval partitioning, but it can be handy for all the other types too.

So does it look like, with the new syntax, we have arrived in a kingdom of wealth and prosperity, where we can easily avoid prefixes on local indexes and still get effective partition pruning? Unfortunately not. The hidden rock is that the partition key in the PARTITION FOR clause cannot be supplied through a variable the way it can in a WHERE clause. Below are examples of such attempts.

First, the traditional approach

 DECLARE
 v_part char(7):='JUL2012';
 BEGIN
 FOR REC IN (SELECT SGENERAL, count(1) FROM TRANSACTIONS
 WHERE PARTITIONKEY=v_part AND SGENERAL is not null
 GROUP BY SGENERAL)
 LOOP
 NULL;
 END LOOP;
 END;
 /
 Elapsed: 00:12:42.05

It works, but it does not look very performance effective. Let’s go and try the 10g approach in PL/SQL

DECLARE
 v_part varchar2(10):="JUL2012";
 BEGIN
 FOR REC IN (SELECT SGENERAL, count(1) FROM TRANSACTIONS PARTITION (v_part)
 WHERE SGENERAL is not null
 GROUP BY SGENERAL)
 LOOP
 NULL;
 END LOOP;
 END;
 /

ERROR at line 2:
 ORA-06550: line 2, column 25:
 PLS-00201: identifier "JUL2012" must be declared
 ORA-06550: line 2, column 10:
 PL/SQL: Item ignored

This looks like kind of expected behaviour (note that the double-quoted "JUL2012" is parsed by PL/SQL as an identifier rather than a string literal, hence the PLS-00201): you cannot pass a table name in through a variable, so why should you be able to pass a partition name? Personally, I put a lot of hope in the last test, which references the partition key

DECLARE
 v_part char(7):='JUL2012';
 BEGIN
 FOR REC IN (SELECT SGENERAL, count(1) FROM TRANSACTIONS
 PARTITION FOR (v_part)
 WHERE SGENERAL is not null
 GROUP BY SGENERAL)
 LOOP
 NULL;
 END LOOP;
 END;
 /

ERROR at line 1:
 ORA-14763: Unable to resolve FOR VALUES clause to a partition number
 ORA-06512: at line 4

It was the first time I had seen this error, so I had a look in the documentation

ORA-14763: Unable to resolve FOR VALUES clause to a partition number

Cause: Could not determine the partition corresponding to the FOR VALUES clause.

Action: Remove bind variables and dependencies on session parameters from the values specified in the FOR VALUES clause.

Ha-ha-ha, nothing changes: we still cannot pass the partition key value into the query.

Summarising: Oracle offers a mechanism that allows partition pruning to be used with non-prefixed local indexes, but does not support that syntax with binds in PL/SQL. Using this syntax only makes sense for big queries (e.g. reports) running on a table, when the gains from the effective execution plan outweigh the losses from dynamic SQL, as sketched below.
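A minimal sketch of the dynamic SQL workaround (a hypothetical cursor-based variant; the partition name is concatenated into the statement text, so it must come from a trusted source):

DECLARE
 v_part     char(7) := 'JUL2012';
 v_cur      SYS_REFCURSOR;
 v_sgeneral transactions.sgeneral%TYPE;
 v_cnt      NUMBER;
BEGIN
 -- build the statement with the partition name inlined, then open it
 OPEN v_cur FOR
   'SELECT sgeneral, count(1) FROM transactions PARTITION ("' || v_part || '")'
   || ' WHERE sgeneral IS NOT NULL GROUP BY sgeneral';
 LOOP
   FETCH v_cur INTO v_sgeneral, v_cnt;
   EXIT WHEN v_cur%NOTFOUND;
   NULL; -- process the row
 END LOOP;
 CLOSE v_cur;
END;
/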

What’s in a Name?

I’ve been meaning to write this up for a while now, but just haven’t found the time. Anyway, this is a little “gotcha” for those installing 11.2 Grid Infrastructure that care about consistency of naming… Maybe you don’t? Maybe I shouldn’t?

While building a 4 node RAC system I got to the point of:

You must run the root.sh script on the first node and wait for it to finish. If your cluster has four or more nodes, then root.sh can be run concurrently on all nodes but the first and last. As with the first node, the root.sh script on the last node must be run separately.

So, I merrily run root.sh and afterwards find that my ASM instances are named in a way I didn’t like or expect. My 4 servers were named: ora11-2-1, ora11-2-2, ora11-2-3, ora11-2-4; and I ended up with ASM instances: +ASM1, +ASM2, +ASM3, +ASM4. All as you’d expect. However, +ASM2 was running on ora11-2-3 and +ASM3 was running on ora11-2-2!

Q1: Does it really matter?

A1: No. At least I can’t see a reason why it would matter, but if you can think of any then please comment.

Q2: Did I want to understand why it happened and how to avoid it?

A2: Of course.

So, a little digging and experimentation later I found what I believe to be the cause of the “problem”. In the rootcrs_`hostname`.log files I found the start time and the point where the ASM instance is created.

Note: There wasn’t anything specifically stating that the ASM instance was being created, but while running root.sh during later tests I watched for the creation of the ASM record in /etc/oratab and correlated that with the log file.

Start of the root.sh on nodes 2 and 3:

[root@ora11-2-2 ~]# grep "The configuration" $ORACLE_HOME/cfgtoollogs/crsconfig/rootcrs_ora11-2-*.log
2011-01-08 00:48:48: The configuration parameter file /u01/app/11.2.0.2/grid/crs/install/crsconfig_params is valid

[root@ora11-2-3 ~]# grep "The configuration" $ORACLE_HOME/cfgtoollogs/crsconfig/rootcrs_ora11-2-*.log
2011-01-08 00:48:54: The configuration parameter file /u01/app/11.2.0.2/grid/crs/install/crsconfig_params is valid

Creation of ASM instance on nodes 2 and 3:

[root@ora11-2-2 ~]# grep "Start of resource \"ora.cluster_interconnect.haip\" Succeeded" $ORACLE_HOME/cfgtoollogs/crsconfig/rootcrs_ora11-2-*.log
2011-01-08 00:56:50: Start of resource "ora.cluster_interconnect.haip" Succeeded

[root@ora11-2-3 ~]# grep "Start of resource \"ora.cluster_interconnect.haip\" Succeeded" $ORACLE_HOME/cfgtoollogs/crsconfig/rootcrs_ora11-2-*.log
2011-01-08 00:56:34: Start of resource "ora.cluster_interconnect.haip" Succeeded

The key thing to note is the times. The running of root.sh on ora11-2-2 started before ora11-2-3, but for whatever reason it got to the creation of the ASM instance on ora11-2-3 before it did on ora11-2-2.

I found it impossible to leave the system with the naming mismatch, so I used rootcrs.pl to deconfigure Clusterware and re-ran root.sh, this time allowing it to finish on each node before starting the next. I ended up with the ASM instance names matching the hostnames and got on with creating databases.
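For reference, the deconfigure step is along these lines, run as root on each node with ORACLE_HOME set to the grid home (flags from my memory of 11.2, so treat them as an assumption and check the version-specific documentation):

# perl $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force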

I haven’t tested this or dug deep enough into the code to be 100% sure of the above explanation, so if anyone has alternative suggestions then please share them.

“Oracle SETASMGIDWRAP” (An illegitimate Google Whack)

When creating a database on a new or seldom used version of Oracle I always like to use the “Generate Database Creation Scripts” option of DBCA so I can have a look at what it’s going to do for me before I decide which scripts I want to run. I recently saw the following line in one of these scripts generated by an 11.2 DBCA:

host /u01/grid/11.2.0/bin/setasmgidwrap o=/u01/app/oracle/product/11.2.0/db_1/bin/oracle 

The name is pretty self-explanatory (it’s the step that changes the group ownership of the oracle binary from the software owner’s primary group, probably OINSTALL, to the OSASM group), but I thought I’d have a quick search on My Oracle Support to find out more…

No results found.

Nothing unusual there, since Metalink was downgraded to its present state I’ve become accustomed to this message :). Let’s try the Oracle documentation:

Your search term oracle setasmgidwrap did not match any topics.

Surely Google will hit something…

Your search – oracle setasmgidwrap – did not match any documents

No hits again. So I’ve created this post purely to create an illegitimate Oracle Googlewhack (at least it will be one once this post is indexed by the search engines). It’s illegitimate, by the way, because:

A Googlewhack must consist of two actual words found in a dictionary.

Below is a demonstration of SETASMGIDWRAP in action (watch the group ownership on the oracle binary):

%*> ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x    1 oracle   oinstall  260199063 Jan 12 15:15 /u01/app/oracle/product/11.2.0/db_1/bin/oracle
%*> $CRS_HOME/bin/setasmgidwrap o=$ORACLE_HOME/bin/oracle
%*> ls -l $ORACLE_HOME/bin/oracle
-r-sr-s--x    1 oracle   asmadmin  260199063 Jan 12 15:15 /u01/app/oracle/product/11.2.0/db_1/bin/oracle 

11.2 Oracle Restart – Changing Hostname

It has taken me a good deal longer than I would have expected to change the hostname on some cloned VMs that are home to Oracle 11.2 databases running on ASM with Oracle Restart.

I was clearly not alone in thinking that it would just be a case of running localconfig delete/localconfig add, as I have done in the past for hosts running Oracle database 10.2 with ASM for storage. However, things have changed and having done a fair amount of searching using both MetaLink (My Oracle Support) and Google it appears that there isn’t much out there yet in the form of clear instructions. The best I could find was My Oracle Support article ID 887658.1, which got me moving in the right direction.

So, in the hope of saving others the time I have lost working this out, here are the steps I used to rename a Linux host running Oracle Database 11.2 with ASM in an Oracle Restart configuration. The steps have been written for an installation that splits the ownership of the “Grid Infrastructure” and the database between a user named grid and a user named oracle respectively. It is important who you run at least some of these commands as, so please check which user you are before each command. Or more likely, if things don’t look to be working then check which user you ran the command as before going further. It is easy to backtrack if you do get it wrong.

Remove the existing configuration as root (with ORACLE_HOME set to the grid home):

[root@old-host grid]# $ORACLE_HOME/perl/bin/perl -I $ORACLE_HOME/perl/lib -I $ORACLE_HOME/crs/install $ORACLE_HOME/crs/install/roothas.pl -delete

* In some of my earlier attempts I stopped everything gracefully, but it does not appear to be necessary.

Change the value of HOSTNAME in the network file using the editor of your choice.

[root@old-host ~]# vi /etc/sysconfig/network

Update /etc/hosts as appropriate

[root@old-host ~]# vi /etc/hosts

Change the IP address in ifcfg-<interface> as required

[root@old-host ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

Change the hostname in the listener configuration file

[root@old-host ~]# vi $ORACLE_HOME/network/admin/listener.ora

* Clearly you don’t need to be root to do the previous command, but as reboot is next it doesn’t seem worth switching user.
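For illustration, the listener.ora change is just the HOST value; a sketch with an assumed listener name and port:

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = new-host)(PORT = 1521))
  )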

Reboot to pickup hostname change

When the host has restarted run roothas.pl as root (with ORACLE_HOME set to the grid home)

[root@new-host grid]# $ORACLE_HOME/perl/bin/perl -I $ORACLE_HOME/perl/lib -I $ORACLE_HOME/crs/install $ORACLE_HOME/crs/install/roothas.pl

The output should look something like:

2009-12-19 15:10:36: Checking for super user privileges
2009-12-19 15:10:36: User has super user privileges
2009-12-19 15:10:36: Parsing the host name
Using configuration parameter file: /u01/app/grid/product/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node new-host successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting

2009/12/19 15:11:58     /u01/app/grid/product/11.2.0/grid/cdata/new-host/backup_20091219_151158.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server

Exit from root user.

You now need to add the listener, ASM, database and services (if applicable) into the Oracle Restart configuration.

[grid@new-host ~]$ srvctl add listener
[grid@new-host ~]$ srvctl add asm -d '/dev/sd[a-h]1'
[grid@new-host ~]$ srvctl start listener
[grid@new-host ~]$ srvctl start asm

* Your asm_diskstring, specified with the -d option to “srvctl add asm”, will almost certainly be different!

Note that the diskgroups are not automatically added back to the Oracle Restart configuration (as shown by the output of “srvctl status” below) and must be manually mounted:

[grid@new-host ~]$ srvctl status diskgroup -g data
PRCR-1001 : Resource ora.DATA.dg does not exist

[grid@new-host ~]$ sqlplus /nolog
SQL> conn / as sysasm
Connected.
SQL> alter diskgroup data mount;

Diskgroup altered.

SQL> exit

[grid@new-host ~]$ srvctl status diskgroup -g data
Disk Group data is running on new-host
[grid@new-host ~]$

Adding the database to Oracle Restart:

[oracle@new-host ~]$ srvctl add database -d ora11gr2 -o $ORACLE_HOME -n ora11gr2 -p +DATA/ora11gr2/spfileora11gr2.ora -a DATA
[oracle@new-host ~]$ srvctl config database -d ora11gr2
Database unique name: ora11gr2
Database name: ora11gr2
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: grid
Spfile: +DATA/ora11gr2/spfileora11gr2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Services:
[oracle@new-host ~]$ srvctl start database -d ora11gr2
[oracle@new-host ~]$ srvctl status database -d ora11gr2
Database is running.
[oracle@new-host ~]$

That should be it. The only other place I needed to update the hostname was in the tnsnames.ora file.