COL$.PROPERTY = 1073741824

When I started writing this post it seemed the problem I’d recently encountered was not directly documented. It is documented, but I didn’t find the relevant MOS notes until I had already worked out what the problem was. Hopefully this post will save others some time.

The problem was reported to me as an ORA-14097 on partition exchange when the tables have the same column definitions.

For those who don’t have this particular Oracle error number committed to memory, the following is the oerr output:

14097, 00000, "column type or size mismatch in ALTER TABLE EXCHANGE PARTITION"
// *Cause:  The corresponding columns in the tables specified in the 
//          ALTER TABLE EXCHANGE PARTITION are of different type or size
// *Action: Ensure that the two tables have the same number of columns 
//          with the same type and size.


The obvious things to check are column order, column data types, column sizes and column constraints. These had already been checked by the person reporting the issue, so it was time to take a deeper look. I was assured that the same partition exchange had worked in other copies of the same database and was failing for the first time in this particular copy.

Here’s a quick example of the problem situation (recreated with simplified tables)…
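DDL along the following lines reproduces the state (a sketch: the default value 'Y' and the partition bound are my assumptions; the key point is that T gains its column via ALTER TABLE ... ADD on 11g, while P has it from creation time):

```sql
-- Hypothetical recreation of the demo tables. The default value 'Y' and
-- the partition bound are assumptions; what matters is that T gains C2
-- via ALTER TABLE ... ADD on 11g, which sets COL$.PROPERTY = 1073741824,
-- while P has the column from creation time (PROPERTY = 0).
create table p (c1 number, c2 varchar2(1) default 'Y' not null)
partition by range (c1)
(partition p_y values less than (maxvalue));

create table t (c1 number);
alter table t add c2 varchar2(1) default 'Y' not null;
```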

Describing both the partitioned (P) and non-partitioned table (T) in the exchange

SQL> desc t 
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 C1                                                 NUMBER
 C2                                        NOT NULL VARCHAR2(1)

SQL> desc p
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 C1                                                 NUMBER
 C2                                        NOT NULL VARCHAR2(1)

SQL>       

No visible difference in the describe output, but the exchange fails…

SQL> alter table p exchange partition p_y with table t;
alter table p exchange partition p_y with table t
*
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION


SQL> 

Querying SYS.COL$ showed two COL$ columns with differing values between the tables being exchanged: two columns had different values for DEFLENGTH (the length of the column default value definition) and one had a different value for PROPERTY. I spent a bit of time working out why DEFLENGTH was different[1], and after satisfying myself that it wasn’t responsible for the ORA-14097 I turned my attention to PROPERTY. One table had a value of 0 for all columns; the other had 0 for all but one column, which had 1073741824.

Using the demo tables to demonstrate the difference in SYS.COL$.PROPERTY

SQL> select name, property
  2    from col$
  3   where obj# = (select object_id
  4		      from dba_objects
  5		     where owner = 'DEMO'
  6		       and object_type = 'TABLE'
  7		       and object_name = 'T')
  8  minus
  9  select name, property
 10    from col$
 11   where obj# = (select object_id
 12		      from dba_objects
 13		     where owner = 'DEMO'
 14		       and object_type = 'TABLE'
 15		       and object_name = 'P')
SQL> /

NAME                             PROPERTY
------------------------------ ----------
C2                             1073741824

SQL> 

It turns out that if I’d searched for “1073741824 ORA-14097 COL$.PROPERTY” I would have found this, which clearly identifies the problem. However, it was MOS note 1112544.1, “Streams Capture Failing With ORA-26744 And ORA-26766”, that gave me what I needed on how a column comes to have a PROPERTY value of 1073741824. As the note states:

There is a column added to the table with a non-null default value.

What I hadn’t realised at this point is that COL$.PROPERTY is not always set to 1073741824 when a column with a non-null default value is added to a table; it is a result of the 11g ADD COLUMN optimisation. The person who pointed this out also explained a more interesting bug that was introduced with the feature, and hopefully he’ll blog about that soon.
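1073741824 is bit 30 (2^30) of COL$.PROPERTY, so BITAND can be used to flag affected columns. The query below is my own sketch, run as SYS:

```sql
-- Sketch: list columns whose PROPERTY has bit 30 (1073741824) set,
-- i.e. columns added with a stored default under the 11g optimisation.
select o.name table_name, c.name column_name, c.property
  from sys.col$ c, sys.obj$ o
 where c.obj# = o.obj#
   and bitand(c.property, 1073741824) = 1073741824;
```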

Anyway, I’ve since found the following MOS notes which cover the situation. But if you don’t know the history of the table, and you don’t know that a COL$.PROPERTY value of 1073741824 means the column was added after initial creation (with a default value and a NOT NULL constraint), then you don’t know that you’re hitting “ORA-14097 At Exchange Partition After Adding Column With Default Value”.

Relevant MOS Notes

  • Common Causes of ORA-14097 At Exchange Partition Operation [ID 1418545.1]
  • ORA-14097 At Exchange Partition After Adding Column With Default Value [ID 1334763.1]

Both notes cover a workaround using event 14529 when CTAS is being used to create the new table.
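As I understand the notes, the workaround looks something like the following; the event level and exact scope are assumptions on my part, so check the notes before relying on it:

```sql
-- Sketch of the event 14529 workaround from the MOS notes (level and
-- scope assumed; verify against note 1334763.1 before use).
alter session set events '14529 trace name context forever, level 1';

create table t_new as
select * from p where 1 = 0;

alter session set events '14529 trace name context off';
```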
__________
1 – The value of DEFAULT_LENGTH for a column depends on whitespace as shown below:

SQL> create table a (c1 number default 1,
  2                  c2 number default 1 not null,
  3                  c3 number default 1     not null, -- tab
  4                  c4 number default 1     not null, -- spaces
  5                  c5 number default 1not null
  6                 )
  7  /

Table created.

SQL> select column_name, default_length                                                    
  2    from user_tab_cols
  3   where table_name = 'A'
  4   order by 1
  5  /

COLUMN_NAME		       DEFAULT_LENGTH
------------------------------ --------------
C1                                          1
C2                                          2
C3                                          2
C4                                          6
C5                                          1

SQL> 

It makes no difference to partition exchange, but it caught my attention as a difference between the tables that I wanted to understand.

Interesting Change in Database Listener Behaviour at 11g

There is a small yet significant improvement in the implementation of TNSLSNR at 11g in the way it writes to the listener log files, both “log.xml” and “listener.log”. First, though, I’ll describe some 10g listener behaviour that anyone who’s written their own script for recycling the log file will have noticed. I should also mention this is all Unix based – I’m not aware whether the same issues exist in a Windows environment.

Below I check the listener log file and then rename it “listener.log.old”.

$ cd /u01/app/oracle/product/10.2.0/db_1/network/log
$ ls -l listener*
-rw-r----- 1 oracle oinstall 3715 Jun 19 22:33 listener.log
$ mv listener.log listener.log.old

Next I connect to a database via the listener and check for files starting with “listener” in the log location.

$ sqlplus a/a@orcl
SQL> exit

$ ls -l listener*
-rw-r----- 1 oracle oinstall 3964 Jun 19 22:34 listener.log.old

$ tail -1 listener.log.old
19-JUN-2011 22:34:48 * (CONNECT_DATA=(SERVICE_NAME=orcl)(CID=(PROGRAM=sqlplus)(HOST=vb-centos-10.2-a)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.105)(PORT=24980)) * establish * orcl * 0

There is no new “listener.log” and my recent connection was logged in the “listener.log.old” file.

I can go one step further and remove the file while the listener is running and while I am “tailing” the file.

### session 1
$ tail -f listener.log.old

### session 2
$ rm listener.log.old
$ sqlplus a/a@orcl

### session 1
19-JUN-2011 22:38:12 * (CONNECT_DATA=(SERVICE_NAME=orcl)(CID=(PROGRAM=sqlplus)(HOST=vb-centos-10.2-a)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.105)(PORT=25010)) * establish * orcl * 0

Log entries are still going to the old log file even though the file is not visible from the operating system.

$ ls -l listener*
ls: listener*: No such file or directory
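The deleted-but-open behaviour above can be reproduced without a listener at all (a sketch, Linux-specific since it relies on /proc):

```shell
# Simulate the 10g listener behaviour: hold a log file open, delete it,
# and keep writing. Linux-specific (relies on /proc).
dir=$(mktemp -d)
exec 3>"$dir/listener.log"
echo "first entry" >&3
rm "$dir/listener.log"              # name removed from the directory
echo "second entry" >&3             # write still succeeds on the open fd
state=$(readlink "/proc/$$/fd/3")   # shows ".../listener.log (deleted)"
echo "$state"
exec 3>&-                           # closing the descriptor frees the space
rmdir "$dir"
```

This is also why "lsof +L1" (open files with a link count of zero) is a handy way to find the space that "du" can’t see.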

If I left it like this I could end up with a huge listener.log file that is essentially invisible. If ever you have “du” and “df” output that disagree by a huge amount in an Oracle home filesystem then this may be why. You should never see it in the ADR though, as that would imply 11g. Before we move on to 11g, we’ll restart the listener logging so the correct file is used (there are many ways to do this, by the way).

$ lsnrctl

LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 19-JUN-2011 22:44:43

Copyright (c) 1991, 2007, Oracle.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> set current_listener listener
Current Listener is listener
LSNRCTL> set log_status off
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vb-centos-10.2-a)(PORT=1521)))
listener parameter "log_status" set to OFF
The command completed successfully
LSNRCTL> set log_status on
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vb-centos-10.2-a)(PORT=1521)))
listener parameter "log_status" set to ON
The command completed successfully

Now let’s demonstrate what happens if we repeat this test at 11g (11.2 in my case but the behaviour is the same at 11.1).

$ cd /u01/app/oracle/diag/tnslsnr/vb-centos-11/listener/trace
$ ls -l listener*
-rw-r----- 1 oracle oinstall 31486 Jun 16 18:36 listener.log
$ mv listener.log listener.log.old

$ sqlplus  a/a@orcl
SQL> exit

$ ls -l listener*
-rw-r----- 1 oracle oinstall 243 Jun 16 18:36 listener.log
-rw-r----- 1 oracle oinstall 31486 Jun 16 18:36 listener.log.old

$ cat listener.log
Thu Jun 16 18:36:56 2011
16-JUN-2011 18:36:56 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl)(CID=(PROGRAM=sqlplus)(HOST=vb-centos-11.2-a)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.102)(PORT=49695)) * establish * orcl * 0

So this time the listener has recreated the “listener.log” file rather than continuing to write to the renamed log file. If we use the strace utility we can see why. An extract from the 11g output is below. Notice the “open” calls for the “log.xml” and “listener.log” files.

lstat64("/u01/app/oracle/diag/tnslsnr/vb-centos-11/listener/alert/log.xml", {st_mode=S_IFREG|0640, st_size=90949, ...}) = 0
open("/u01/app/oracle/diag/tnslsnr/vb-centos-11/listener/alert/log.xml", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0660) = 15
fcntl64(15, F_SETFD, FD_CLOEXEC)        = 0
write(15, "<msg time='2011-06-16T18:12:17.1"..., 236) = 236
close(15)                               = 0
stat64("/u01/app/oracle/diag/tnslsnr/vb-centos-11/listener/alert/log.xml", {st_mode=S_IFREG|0640, st_size=91185, ...}) = 0
times({tms_utime=118, tms_stime=1077, tms_cutime=0, tms_cstime=0}) = 434969334
lstat64("/u01/app/oracle/diag/tnslsnr/vb-centos-11/listener/trace/listener.log", {st_mode=S_IFREG|0640, st_size=1021, ...}) = 0
open("/u01/app/oracle/diag/tnslsnr/vb-centos-11/listener/trace/listener.log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0660) = 15
fcntl64(15, F_SETFD, FD_CLOEXEC)        = 0
write(15, "Thu Jun 16 18:12:17 2011\n", 25) = 25

And below is an extract from tracing a 10g listener.

_llseek(3, 0, [4511], SEEK_CUR)         = 0
write(3, "19-JUN-2011 22:41:01 * service_u"..., 49) = 49

This time we do not get an open for each write – we only see the file reopened if we restart logging.

open("/u01/app/oracle/product/10.2.0/db_1/network/log/listener.log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 3

Let’s hope the next improvement is to build cycling of the text “listener.log” file (not the XML version) into the code itself.
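Until then, rotating the text log stays a DIY job. A minimal helper might look like this (the timestamp suffix is my own choice); because the 11g listener reopens the file for every write, a plain rename is enough and the next log entry recreates “listener.log”:

```shell
# Minimal rotation sketch for the 11g text listener.log. A rename is
# sufficient because the 11g listener reopens the file for each write.
rotate_listener_log() {
    dir=$1
    [ -f "$dir/listener.log" ] || return 0
    mv "$dir/listener.log" "$dir/listener.log.$(date +%Y%m%d%H%M%S)"
}

# Demonstrate against a scratch directory rather than a live ADR home.
demo=$(mktemp -d)
touch "$demo/listener.log"
rotate_listener_log "$demo"
ls "$demo"
```

On 10g you would additionally need to bounce logging (set log_status off/on) around the rename, for the reasons demonstrated above.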

SYS and Password Resource Limits

In the process of writing another post I have stumbled into something that has really got me interested. I’ll crack on with the original post soon, but right now I have an uncontrollable urge to share this.

Unless I’m missing something, there has been a fundamental change to the enforcement of “password resource limits” for SYS in the move from 10.2 to 11.1… It might be that this is documented somewhere; I have looked and didn’t find any mention of it. Although, I have to admit that I haven’t read the 11.1 documentation from start to finish, or even the “New Features Guide” in its entirety.

It was quite a while ago that I became aware that SYS is not subject to password expiry and that this is expected behaviour, as detailed in MOS note 289898.1. This caused me minor concern as it robbed me of a tool to force the hand of the DBA team towards regularly changing the SYS password. But what I discovered yesterday has potentially serious implications for those of us who care about database security. Please see my findings below…

For the purposes of the example we have two users of interest, SYS and MARTIN, both with the DEFAULT profile provided by Oracle, with the addition of a password verification function. Watch what happens as you move from Oracle version 10.2 to 11.1…
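The password_function.pls script isn’t reproduced here; a minimal stand-in consistent with the ORA-20002 message in the transcripts would be (my sketch, not the original):

```sql
-- Minimal stand-in for password_function.pls (assumed content): it only
-- enforces the length rule behind the ORA-20002 message in the output.
create or replace function password_function (
    username     in varchar2,
    password     in varchar2,
    old_password in varchar2
) return boolean is
begin
    if length(password) < 8 then
        raise_application_error(-20002, 'Password length less than 8');
    end if;
    return true;
end;
/
```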

10.2.0.4

[oracle@ora-play ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.4.0 - Production on Thu May 20 06:28:57 2010

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

SQL> conn / as sysdba
Connected.
SQL> select username, profile from dba_users where username in ('SYS','MARTIN');

USERNAME		               PROFILE
------------------------------ ------------------------------
MARTIN			               DEFAULT
SYS                            DEFAULT

SQL> select limit from dba_profiles where resource_name = 'PASSWORD_VERIFY_FUNCTION' and profile = 'DEFAULT';

LIMIT
----------------------------------------
NULL

SQL> @password_function.pls

Function created.

SQL> alter profile default limit password_verify_function password_function;

Profile altered.

SQL> alter user martin identified by simple;
alter user martin identified by simple
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed
ORA-20002: Password length less than 8


SQL> alter user sys identified by simple;
alter user sys identified by simple
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed
ORA-20002: Password length less than 8


SQL> 

Nothing too radical there. Now moving to 11.1…

11.1.0.7

[oracle@ora-play ~]$ sqlplus /nolog

SQL*Plus: Release 11.1.0.7.0 - Production on Thu May 20 07:00:14 2010

Copyright (c) 1982, 2008, Oracle.  All rights reserved.

SQL> conn / as sysdba
Connected.
SYS> select username, profile from dba_users where username in ('SYS','MARTIN');

USERNAME                       PROFILE
------------------------------ ------------------------------
SYS                            DEFAULT
MARTIN                         DEFAULT

SYS> select limit from dba_profiles where resource_name = 'PASSWORD_VERIFY_FUNCTION' and profile = 'DEFAULT';

LIMIT
----------------------------------------
VERIFY_FUNCTION_11G

SYS> @password_function.pls

Function created.

SYS> alter profile default limit password_verify_function password_function;

Profile altered.

SYS> alter user martin identified by simple;
alter user martin identified by simple
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed
ORA-20002: Password length less than 8


SYS> alter user sys identified by simple;

User altered.

SYS>


*Note that I replaced the 11g password verification function with my own to be consistent in the testing.

Did you spot that? SYS is not affected by the constraints of the password verification function in 11.1, but is in 10.2. The behaviour is still present at 11.2 (in fact I first experienced it there), and I’ve tested both 11.1.0.6 and 11.1.0.7 to confirm it exists in both.

“So what?” I hear some of you say… But how can I now stand in front of an auditor and say, “All passwords used in our databases comply with the corporate password policy, as enforced by the password verification function”? OK, there are always going to be ways for someone who wants to set a very simple password to do so (temporarily changing profile, for example), so there aren’t really any guarantees. But given that setting simple passwords is a “lazy” approach1, making it harder to set a simple password than a complex one means the “lazy” guy will accept the situation and use a complex password… Right?

I can’t work out why Oracle would choose to do this (although the next post might provide a hint of the reason).

1 Sorry if that offends anyone.

Extended RAC Cluster

Introduction

I’ve just finished building an extended RAC cluster on Oracle VM following the instructions written by Jakub Wartak. I can’t claim it was plain sailing, so I’m listing the issues I encountered here in the hope that it helps someone else.

Before I start with the issues I want to thank Jakub for making his article available. After first seeing it about 6 or 7 months ago I wanted to get some kit to play with… It took a while for me to decide on what to order and there were other distractions to attend to, but I ordered the following a couple of weeks ago.

  • Asus V3-M3N8200 AM2 Barebone
  • AMD Phenom X4 9350e
  • Kingston DDR2 800MHz/PC2-6400 HyperX Memory (8GB)
  • Western Digital WD5000AAKS 500GB SATA II x 3

My management box for Oracle VM Manager and the NFS export for the 3rd voting disk is an old Compaq EVO D510 SFF (2.0GHz, 512MB RAM, 80GB HDD). It’s worth noting that Oracle state that Oracle VM Manager 2.1.2 requires 2GB RAM, but I’ve managed with 512MB.

A question I asked myself before completing the installation(s), and something that I’ve been asked by a couple of colleagues, is: “Is it possible to install Oracle VM Server and not use Oracle VM Manager?” The answer seems to be a definite YES.

Oracle VM Manager Installation

I am running Oracle Enterprise Linux 5 and Oracle VM Manager installed with no issues. The only slight gotcha was the installer complaining about insufficient swap space. I hit this the second time I installed Oracle VM Manager, and on investigation it was due to swap space already being in use. I shut a few things down and ran a quick “swapoff -a; swapon -a”.

Oracle VM Server Installation

This is where the majority of my time was spent. The first issue I hit was the installer not being able to see my 3 SATA disks. After a fair amount of frustration and reading I discovered the Linux boot option *pci=nomsi*. This, combined with setting my BIOS to treat the disks as *AHCI* rather than SATA, resolved the issue.

The next problem was stopping my machine (all brand new kit) from rebooting. I was probably a bit slow to work this one out, but it turned out that one of my four 2GB RAM sticks was bad; as soon as I pinpointed the problem and stuck to 6GB I could move on. Based on this experience, and discussion with my sysadmin colleagues, I’d recommend running memtest86 from the Oracle VM Server installation CD before attempting the installation.

Well, not quite. There was one more issue holding me back from the stuff I really wanted to be doing. I don’t know if this can be explained by a different version of the Oracle VM templates or of Oracle VM Server, but it turns out I was hitting bug 223947. The symptoms were the messages below showing up on the console of my VM Server.

raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 5
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 4
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 4
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 4
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 4
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 4
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 4
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 4
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2490174971 4
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521921017 5
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521921022 5
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521921020 5
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521921022 5
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521921022 5
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521921019 11
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521988084 8
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521993716 8
raid0_make_request bug: can't convert block across chunks or bigger than 256k 2521921022 5

Maybe the version of the VM template that Jakub used did not use LVM? Anyway, moving to a disk configuration that relied on RAID 1 and RAID 5 got me around this issue.

Creating the Openfilers

When I attempted to start up the Openfilers for the first time I received an error relating to the bridge sanbr0.

Error: Device 1 (vif) could not be connected. Could not find bridge device sanbr0

To resolve this I just skipped ahead to the section that sets up the bridges and ran those commands earlier than specified.

brctl addbr sanbr0
ip link set dev sanbr0 up

Once I’d got the Openfiler VMs running and followed the instructions Jakub provided for configuration, I experienced a peculiar issue with the web page. When logging in I was not taken to the “Administration Section”, but instead to “Home”. On the Home page there is a link, “administer the storage device from here.”, which when clicked took me back to the Home page. I did a bit of searching and found a post on the Openfiler forum. This didn’t really give me much to go on, but I left everything running whilst I went to work and returned to find the same problem… I then tried restarting Firefox and, hey presto, it worked. I don’t have a good answer for why that fixed it other than something cache-related.

Configuration of Oracle Enterprise Linux VMs

Use of quotation marks in echo “MTU=9000” >> /etc/sysconfig/network-scripts/ifcfg-eth1 (etc.) caused an issue when restarting the network service; the quotation marks needed to be removed from the file.

Update: The above issue is due to copy & paste from HTML to shell – the double quotation marks in the HTML are not translated to “simple” double quotation marks in shell as shown below (thanks Jakub):

     [vnull@xeno ~]$ echo “MTU=9000” | cat -v
     M-bM-^@M-^\MTU=9000M-bM-^@M-^]
     [vnull@xeno ~]$ echo "MTU=9000" | cat -v
     MTU=9000

My Oracle Enterprise Linux VMs did not have a /dev/hdd, so I ran fdisk -l and discovered /dev/xvdb, which can also be seen in the vm.cfg file. I assume this has changed in the VM templates since Jakub downloaded his.

The iSCSI disks were presented differently than described in the article, which I believe is a result of something not going to plan in the /etc/udev/scripts/iscsidev.sh script, but I don’t know this for sure. I became aware of the problem when the script to partition the iSCSI disks generated errors. fdisk -l showed me disks sda – sdf, so I just created partitions on these and have used them directly without any problems to date (it’s only been 3 days). The output below might be helpful in working out what has gone wrong.

[root@erac1 ~]# ls -l /dev/iscsi/
total 0
drwxr-xr-x 2 root root 380 Mar 1 19:09 lun
[root@erac1 ~]# ls -l /dev/iscsi/lun
total 0
lrwxrwxrwx 1 root root 12 Mar 1 19:09 part -> ../../../sdf
lrwxrwxrwx 1 root root 12 Mar 1 19:09 part0 -> ../../../sg0
lrwxrwxrwx 1 root root 13 Mar 1 19:09 part1 -> ../../../sde1
lrwxrwxrwx 1 root root 14 Mar 1 19:08 part10 -> ../../../ram10
lrwxrwxrwx 1 root root 14 Mar 1 19:08 part11 -> ../../../ram11
lrwxrwxrwx 1 root root 14 Mar 1 19:08 part12 -> ../../../ram12
lrwxrwxrwx 1 root root 14 Mar 1 19:08 part13 -> ../../../ram13
lrwxrwxrwx 1 root root 14 Mar 1 19:08 part14 -> ../../../ram14
lrwxrwxrwx 1 root root 14 Mar 1 19:08 part15 -> ../../../ram15
lrwxrwxrwx 1 root root 12 Mar 1 19:09 part2 -> ../../../sg2
lrwxrwxrwx 1 root root 12 Mar 1 19:09 part3 -> ../../../sg3
lrwxrwxrwx 1 root root 12 Mar 1 19:09 part4 -> ../../../sg4
lrwxrwxrwx 1 root root 12 Mar 1 19:09 part5 -> ../../../sg5
lrwxrwxrwx 1 root root 13 Mar 1 19:08 part6 -> ../../../ram6
lrwxrwxrwx 1 root root 13 Mar 1 19:08 part7 -> ../../../ram7
lrwxrwxrwx 1 root root 13 Mar 1 19:08 part8 -> ../../../ram8
lrwxrwxrwx 1 root root 13 Mar 1 19:08 part9 -> ../../../ram9

From looking at the script, and at later parts of the instructions, I would have expected the lun directory to have a digit at the end. As this isn’t currently causing me any issues I’ve not looked into it further.

Third Voting Disk

During installation of Oracle Clusterware I received an error when specifying 3 locations for my voting disks.

The location /votedisk/third_votedisk.crs, entered for the Additional Cluster Synchronization Services (CSS) voting disk is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system file that is visible by the same name on all nodes of the cluster.

I continued the installation with only one voting disk and went back afterwards to work out what the issue was. It turned out to be a permissions problem: I needed to modify the options in /etc/exports as shown below.

/votedisk *(rw,sync,all_squash,anonuid=500,anongid=500)

to

/votedisk *(rw,sync,all_squash,anonuid=500,anongid=501)

The permissions of the third_votedisk.crs file also required changing to match the “anon” settings, which in my case, due to differing UID and GID values on the Oracle VM Manager box, meant setting the following permissions.

[martin@ora-vmm ~]$ ls -l /votedisk/third_votedisk.crs
-rw-r----- 1 martin dba 335544320 Mar 1 20:04 /votedisk/third_votedisk.crs

The important thing is not what the permissions show as locally, but how they appear on the RAC nodes, i.e.:

[oracle@erac1 ~]$ ls -l /votedisk/third_votedisk.crs
-rw-r----- 1 oracle oinstall 335544320 Mar 1 2009 /votedisk/third_votedisk.crs

I assume that the group read permission could be safely removed if deemed desirable from a security point of view.

Enterprise Manager

Near the end of the database installation I received an error regarding Enterprise Manager. I don’t recall the details, but I can access the Enterprise Manager console and things seem to work so far. I’ll update the post if I discover any issues.