Deterministic Service Failover

Until some testing about 6 months ago I was under the impression that if a database service was defined as "preferred" on 1 instance of a RAC database and "available" on N other instances (where N is greater than 1), and the preferred instance was killed, then the instance the service moved to was not deterministic, i.e., Oracle picked one of the available instances to start the service on in an unpredictable way. I cannot work out what gave me this impression, so I will just have to put it down to an assumption I made at some point in the past and never validated. From conversations with other DBAs I know I was not alone.

Anyway, I now believe that the failover is deterministic, as demonstrated below.

Creating some test services

$ srvctl add service -d DB_TST -s DB_TST_SVC1 -r "DB_TST1" -a "DB_TST2,DB_TST3,DB_TST4"
$ srvctl add service -d DB_TST -s DB_TST_SVC2 -r "DB_TST1" -a "DB_TST3,DB_TST4,DB_TST2"
$ srvctl add service -d DB_TST -s DB_TST_SVC3 -r "DB_TST1" -a "DB_TST4,DB_TST2,DB_TST3"
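
If you want to double-check how the preferred/available lists ended up once the services are created, srvctl will report the configuration back; for example (output omitted here):

$ srvctl config service -d DB_TST -s DB_TST_SVC1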

Verifying the state of the services

$ srvctl status service -d DB_TST
Service DB_TST_SVC1 is running on instance(s) DB_TST1
Service DB_TST_SVC2 is running on instance(s) DB_TST1
Service DB_TST_SVC3 is running on instance(s) DB_TST1

Killing instance 1

At this point I killed instance 1 using “shutdown abort” in SQL*Plus.
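
For reference, that was nothing more exotic than the following while connected to instance 1:

$ sqlplus / as sysdba
SQL> shutdown abort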

Checking the status of the services

$ srvctl status service -d DB_TST
Service DB_TST_SVC1 is running on instance(s) DB_TST2
Service DB_TST_SVC2 is running on instance(s) DB_TST3
Service DB_TST_SVC3 is running on instance(s) DB_TST4

As you can see, each service failed over to the first instance defined in its "available" list, i.e., the failover target is based on the order in which the available instances are defined. I did a few more tests to convince myself that this wasn't just a one-off, but won't fill this post with the output.

If you’re reading this and thinking, “Yes, that was obvious.” Sorry!

During the testing I ran into something else that appears very trivial, but is worth being aware of… When is "shutdown abort" not an immediate abort?

Access to V$RESTORE_POINT

I have recently been working in an environment that uses guaranteed restore points (GRPs) more than anywhere else I've ever worked, and therefore I've found myself querying V$RESTORE_POINT a lot. Or at least I've been trying to. The client also takes on board the wisdom that granting SELECT_CATALOG_ROLE to those that don't need all it has to offer is not a good idea, so they have created a new role to provide the level of access deemed appropriate for users such as me in their production and UAT environments. When I first tried to access V$RESTORE_POINT in such an environment I didn't properly absorb the error in front of me:

select name from v$restore_point
                 *
ERROR at line 1:
ORA-01031: insufficient privileges

Initially I incorrectly assumed that my user did not have access to V_$RESTORE_POINT, but of course that would have resulted in "ORA-00942: table or view does not exist". When I checked and found that I did have SELECT on V_$RESTORE_POINT through the custom role, I realised that I was going to have to look a little deeper.
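
For anyone wanting to perform the same check from their own account, ROLE_TAB_PRIVS is one place to look; a minimal sketch, with CUSTOM_ROLE standing in as a hypothetical name for the client's role:

select role, privilege
from   role_tab_privs
where  table_name = 'V_$RESTORE_POINT'
and    role = 'CUSTOM_ROLE';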

The weeks went by and those weeks turned into months…

Last week I finally found the time to take a proper look. What I discovered surprised me and I’m still looking to understand why Oracle would choose to implement access to [G]V$RESTORE_POINT in this way…

The output below and the accompanying comments demonstrate how access to V$RESTORE_POINT is controlled.

SYS can SELECT from V$RESTORE_POINT (no surprises):

SYS@orcl> select name from v$restore_point;

NAME
--------------------------------------------------------------------------------
PRE_SELECT_CATALOG_ROLE_TEST

SYS@orcl> 

A user with only CREATE SESSION and SELECT_CATALOG_ROLE can SELECT from V$RESTORE_POINT (no big surprises):

SYS@orcl> create user scr_test identified by scr_test01;

User created.

SYS@orcl> grant create session to scr_test;

Grant succeeded.

SYS@orcl> grant select_catalog_role to scr_test;

Grant succeeded.

SYS@orcl> conn scr_test/scr_test01
Connected.
SCR_TEST@orcl> select name from v$restore_point;

NAME
--------------------------------------------------------------------------------
PRE_SELECT_CATALOG_ROLE_TEST

SCR_TEST@orcl> 

So now for something a little radical…

Drop the SELECT_CATALOG_ROLE and recreate it with no privileges, then grant the empty role to SCR_TEST:

SCR_TEST@orcl> conn / as sysdba
Connected.
SYS@orcl> drop role select_catalog_role;

Role dropped.

SYS@orcl> create role select_catalog_role;

Role created.

SYS@orcl> grant select_catalog_role to scr_test;

Grant succeeded.

SYS@orcl>

… and then test SELECT from V$RESTORE_POINT as SCR_TEST:

SYS@orcl> conn scr_test/scr_test01
Connected.
SCR_TEST@orcl> select name from v$restore_point;

NAME
--------------------------------------------------------------------------------
PRE_SELECT_CATALOG_ROLE_TEST

SCR_TEST@orcl> 

What you read above isn't the first thing I tried when attempting to work out what privileges I would need in order to access V$RESTORE_POINT. My earlier activities involved SQL Trace, but that didn't get me very far as I found myself in numerous recursive queries on tables like OBJAUTH$. Deciphering those will have to wait for another day. Until then it seems sufficient to know that you need at least to be granted a role named SELECT_CATALOG_ROLE in order to select from V$RESTORE_POINT, even if that role has no privileges itself!

This has similarities to a post from Neil on SELECT_CATALOG_ROLE being hardcoded into the definition of KU$_HTABLE_VIEW.

SELECT_CATALOG_ROLE isn’t hardcoded into the definition of GV$RESTORE_POINT, which is based on X$KCCRSP and X$KCCNRS, so I can only assume that it is hardcoded into these “fixed tables”.

According to this un(officially)published Oracle support note, X$KCC is "Kernel Cache Control file management", so maybe RSP is ReStore Point and NRS is Named ReStore point?

OOW: Is It Worth It?

This is a little later than most Oracle OpenWorld 2012 write-ups as this year I extended what has become my annual trip to San Francisco and spent another 2 weeks in the US.

The vast majority of the “Oracle geeks” I know in the UK have never been to OpenWorld and therefore from time to time, generally as the event is approaching, I get asked: Is it worth it?

If you want the one-word answer: Yes. If you want the justification then please read on…

I feel the most important point for those in the UK, regardless of how into the technology you are, is: It’s in San Francisco!

This might sound like I think the other aspects of attending are not as important, but that is not what I'm attempting to convey. The point is that if you are a hardcore Oracle geek, what better location than San Francisco to learn more about the software you spend so much of your life working/playing with? At the other end of the scale, if you're not really interested in the technical detail of the software, but need to be aware of what is going on with one of the major players in the IT world, then again, what better location than San Francisco? OK, I've not travelled very widely in the US, but of the places I've visited San Francisco definitely holds the top spot. I just really like the vibe of the city and don't recall meeting anyone that doesn't enjoy the location element of OpenWorld.

OK, so the next justification for those that need more incentive: it's a great networking opportunity. It must be the highest concentration of Oracle geeks at any point in the year. The people who wrote the Oracle-related books you really like will probably be there, and they'll possibly be presenting. The people who write the blogs you read, and eagerly await, will probably be there and are possibly presenting at OpenWorld or one of the other events that are on at the same time in nearby locations (more on which below).

The Bloggers’ Meetup is a must for those that blog themselves or those who follow many blogs and want to meet the people that write them. Thanks to Pythian and OTN for putting this event on.

The final point: it is a fantastic environment to learn more about Oracle and associated technologies.

There is so much going on… I remember planning my first OpenWorld and picking something to attend for literally every slot in the schedule. I was keen to make the most of the conference and, despite reading and hearing the advice of others not to try to do too much, I did. There are often two or more sessions in the same slot that I want to attend, and what made this even more frustrating in my first year were the "marketing trap" sessions: the ones that sound great, but, as I recall saying at the time, "I don't come here to listen to someone give me an overview of functionality or recite the data sheets." I want the "war stories", internals and undocumented stuff. I understand that attendees vary in both their existing knowledge and what they came to OpenWorld to gain, so I can understand why these sessions exist, but bear the following in mind:

  1. Use the Schedule Planner to select your sessions in advance and do your research on the presenters
  2. People's views of what constitutes a "Deep Dive" vary
  3. Look for sessions from people who you know you like listening to from previous presentations, or who write blogs you enjoy
  4. Do the best you can to ensure the level of prior knowledge or level of detail in the presentation is appropriate for you
  5. Accept the fact that you're not going to be able to attend every session you would like to be at due to scheduling conflicts
  6. The User Group Sunday technical sessions are a must, so make sure you arrive in time



Other Parts of OpenWorld

There are parts of OpenWorld that I haven't really found time for, but that would probably be of great value to some. There are the two large halls of exhibitions with a fairly diverse range of exhibitors. If you're considering a vendor, or already use one, then these could present you with a good opportunity to speak directly with employees of that company. Within the same halls, and some other locations, you'll find the Oracle Demo Grounds. I've never actually attended a demo session, but if you want to know more about a particular Oracle product or feature then this is going to be a good place to do so. It will give you a chance to speak with the Oracle employees that are directly involved in the product/feature you are interested in.

You get access to both of the above with a $125 (2012) “Discover” pass, which is a huge saving over the full conference pass.


The “Side Events” for 2012 – That really does not do them justice!

Enkitec Sessions @ Jillian's – Enkitec has focused heavily on Exadata expertise, and therefore if you want to know about Exadata then listening to the likes of Andy Colvin, Kerry Osborne, Tanel Poder and Tim Fox is a very good idea. The atmosphere was very informal with a focus on live demos and hacking sessions. Beer was provided, as was finger food. One point worth noting is that the staff at Jillian's were telling people that they were closed, so you needed to explicitly state that you were there to visit Enkitec. Don't let this put you off; once inside it was a very friendly place.

illumos and ZFS Days – This might not be for everyone as it focuses on system administration rather than the Oracle Database server, but some of the presenters at the event are likely to be names you have come across over the years, particularly if you have a Solaris background. Unfortunately I wasn't able to spend as much time as I would have liked here, but I believe everything was recorded, so I hope to catch up over the coming weeks. Check out the following links if you want to see the available videos: illumos Day and ZFS Day.

OakTable World – I was lucky enough to find out about Oracle Closed World for my 2nd trip to OpenWorld. This is definitely a place to visit if you want to get into the technical details. It was very disappointing that "Closed World" did not happen at OpenWorld 2011, but these things take time and money to arrange. Thankfully it was back with a vengeance this year thanks to the hard work of Kyle Hailey (who looked to be very busy making sure everything went smoothly) and sponsorship from Delphix and Pythian. The event was renamed OakTable World this year and the line-up can be found here. Most of the sessions were recorded. At the moment only a selection are available on the site, and Tanel's is available here. Hopefully more will be edited and made available in due course.


Closing Note

If you do attend Oracle OpenWorld next year then Twitter is definitely your friend. It is a great way of finding out what is going on, where the people you know are, and what the news coming out of the sessions you couldn't attend is.

Hope to see you there! @martinpaulnash

OOW: So Much To Do

Just a quick blog post to say that I’m out in San Francisco for the 4th year running to attend Oracle OpenWorld, and as much as I can fit in of the other events that happen at the same time!

There’s so much going on it can be pretty tricky to work out where the best place to be at any given time is. The User Group Sunday sessions are always a highlight so I’m looking forward to a busy day tomorrow.

I’ll be tweeting things that grab my attention more than blogging, so if that interests you then follow me there @martinpaulnash – I’d like a shorter Twitter handle, but every one I think of is already taken by someone that never tweets, and in some cases follows no-one!

If you can’t be at OpenWorld yourself then some of the content will be broadcast on YouTube so look out for that. As far as I know none of the techie sessions at OpenWorld will be streamed live, but if your interest extends beyond the Oracle Database Server then you may want to check out illumos Day and ZFS Day.

There may also be broadcasts from OakTable World. I haven't seen any details yet, but I'm pretty sure Tanel has used GoToMeeting during his "hacking sessions" in past years.

SQL Trace Without Storage Worries

I've recently been involved in working on an SR with Oracle that included a request for SQL Trace to be provided for a session affected by the reported issue. Clearly not an unreasonable request, but the problem was that we did not have a reproducible test case. I'm not going to cover the details of the actual problem, as that's not the point of the post, but if you don't know under what conditions you are going to generate a particular Oracle error (which you believe to be incorrectly raised) then you either trace every application session that could hit the problem or you limit your scope and hope you get lucky. I try to keep "hoping to get lucky" out of my daily work, but the option of tracing every session was simply not viable as it would generate too much data for us to manage at a file system level. What we needed was "flashback tracing" in order to retrospectively enable tracing for a session once it hit the error… In steps SQL Trace Buffering.

I first became aware of this functionality when attending Tanel Poder's hacking session at Oracle Closed World in 2010, but have since discovered that Julian Dyke covered it in his Oracle Diagnostics presentation, which Julian's website shows as first given back in 2002 at the UKOUG Conference!

I suppose the fact that it's taken nearly two years from becoming aware of the functionality to using it in anger could point to it not needing to be on the list of things every Oracle DBA needs to know, but it is a really cool feature and hence I want to spread the word.

Buffered SQL Trace allows you to allocate some memory to hold SQL Trace data for a session. The memory buffer is written to in a circular fashion until the buffering is turned off, at which point the contents are written to disk.

Taken directly from Julian's presentation:

Turn on the buffering

ALTER SESSION SET EVENTS 'immediate trace name trace_buffer_on level <level>';

*Where <level> is the size in bytes of the buffer

Turn off the buffering and flush to disk

ALTER SESSION SET EVENTS 'immediate trace name trace_buffer_off';
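
For example, to give the current session a buffer of roughly 1 MB and later flush whatever it holds to the trace file (the buffer size here is purely illustrative):

ALTER SESSION SET EVENTS 'immediate trace name trace_buffer_on level 1000000';
-- ... enable SQL Trace and run the workload of interest ...
ALTER SESSION SET EVENTS 'immediate trace name trace_buffer_off';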

You can also set up the buffered tracing via ORADEBUG, which is the approach I used and will be shown later.

That was half the problem solved: we had a way of having the SQL Trace data captured, but not actually written to disk, for all application sessions, and therefore we were not in danger of filling the DIAGNOSTIC_DEST.

The second half of the problem was triggering Oracle to flush the buffered SQL Trace data to disk when we hit one of the errors that was believed to be falsely raised. This is pretty straightforward as you can instruct Oracle to perform a "diagnostic action" when a particular error is encountered. There were 4 errors that we wanted to dump the trace data for, so the approach was as shown below:

SQL> oradebug setorapid <process id for session we wanted to trace>
SQL> oradebug session_event sql_trace wait=true, bind=true trace_buffer_on(256000)
SQL> oradebug session_event <error number> trace name trace_buffer_off; name errorstack level 3
SQL> oradebug session_event <error number> trace name trace_buffer_off; name errorstack level 3
SQL> oradebug session_event <error number> trace name trace_buffer_off; name errorstack level 3
SQL> oradebug session_event <error number> trace name trace_buffer_off; name errorstack level 3

As we wanted this for every application session it was just a case of writing a statement that would generate the above for all the relevant sessions and spooling the commands to a file that could then be run.
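
A minimal sketch of the sort of statement I mean, assuming the application sessions can be identified by username (APP_USER and the error number 600 are purely illustrative; repeat the error event line for each error of interest):

set pages 0 lines 200 feedback off
spool oradebug_commands.sql
select 'oradebug setorapid '||p.pid||chr(10)||
       'oradebug session_event sql_trace wait=true, bind=true trace_buffer_on(256000)'||chr(10)||
       'oradebug session_event 600 trace name trace_buffer_off; name errorstack level 3'
from   v$process p, v$session s
where  s.paddr = p.addr
and    s.username = 'APP_USER';
spool off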

As a final note, if you’ve not read Tanel’s posts on the new (at 11g) diagnostic framework then have a read of these two:

The full power of Oracle’s diagnostic events, part 1: Syntax for KSD debug event handling
The full power of Oracle’s diagnostic events, part 2: ORADEBUG DOC and 11g improvements

Orphaned Files in ASM

There may be a better way to achieve this, but as I have found myself using it a number of times it seems worth sharing…

The problem of orphaned files can occur with or without ASM, and I'm sure most readers have found themselves identifying database files that are present at a file system or ASM level, but that the database (control file) knows nothing about. You want to clean up the debris from previous activities, but you need to be confident that your programmatic or manual verification of the file being orphaned is correct… No one wants to delete a file that should not be removed!

Before going into the steps I use for this I need to stress that the approach relies on an assumption:

ASM directory “+<DISKGROUP_NAME>/<DB_UNIQUE_NAME>” contains no database files that should be retained and belong to a database with a different DB_UNIQUE_NAME.

… and the example code snippets are based on the following further assumptions/statements:

  1. oratab has the instance name listed – Last time I checked, DBCA does not take care of this for you in a RAC environment, but many companies add instance names. If you don't then you'll need to modify the approach to append the appropriate number to ORACLE_SID before the SQL*Plus & RMAN commands. One way to achieve this is covered in RAC/Exadata Shell Aliases, but to keep this simple I'll leave that to those that need it. Note that you would also need to modify the approach used to run for multiple databases that is given at the end of the post.
  2. Parts of the code snippets assume you're not running more than an eight node RAC database – Actually, the approach would work on a RAC database with more than 8 nodes as long as you didn't run the script on a node with an instance number greater than 8.
  3. It is not perfect/foolproof!

If you use Oracle Managed Files (OMF) for all databases that use the ASM storage then you're very likely (but not guaranteed) to be safe to use this approach; however, I can imagine a number of ways to end up in a position where this is not the case. For example, if a database has been renamed, but the database files have not been moved within ASM to match the new name. This is something I've previously covered in Rename Database When Using ASM.

Basically, you have been warned and are responsible for making sure this approach is safe in your environment.

To protect you (dear Googler who just wants a quick fix for your immediate problem) from making a mistake you may not appreciate you are about to make, it seems appropriate to provide the steps rather than my complete script, so that there is a slightly higher barrier to deleting things you really don't want to delete.

OK, so that is enough of my caution.

Orphaned Files for a Single Database

Getting a list of the orphaned files for a database is as simple as the following (assuming the environment has already been sourced using oraenv for the instance):

rman log=orphaned_files_${ORACLE_SID%[1-8]}_DATA.log << EOF
connect target /
catalog start with '+DATA/${ORACLE_SID%[1-8]}';
EOF

A couple of comments about this snippet:

  1. The ${ORACLE_SID%[1-8]} strips the instance number off ORACLE_SID, which under most circumstances will give you the DB_NAME or DB_UNIQUE_NAME in a RAC environment, but this is not guaranteed to be the case as some people do odd things with naming standards.
  2. If you're not using this in a RAC environment and you have instance names that end with a digit, which seems fairly common, then replace all the ${ORACLE_SID%[1-8]} strings with ${ORACLE_SID} to avoid chopping the end off the name.
  3. The diskgroup name is hard-coded in this example, but it's pretty trivial to come up with a way of obtaining the diskgroups, and one approach is given later in this post.

Generating the File Deletion Script

Once the “orphaned_files_*” file has been created then we need to parse out the file names and turn them into ASMCMD rm commands:

awk -v db=${ORACLE_SID%[1-8]} '{if (($1" "$2 == "File Name:") && ($3 !~ /spfile/)) {print "asmcmd rm "$3}}' orphaned_files_${ORACLE_SID%[1-8]}_*.log > asmcmd_cleanup.sh

This will generate a file named asmcmd_cleanup.sh containing the commands to delete all the orphaned files that were found in +<DISKGROUP_NAME>/<DB_UNIQUE_NAME> for the database, excluding any file or path containing the string "spfile" (because many places create an alias for the spfile). You may want to add further exclusions to this, as shown below.
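
For example, to also exclude anything containing the string "controlfile" (purely illustrative; adapt the exclusions to what you know lives in your diskgroups):

awk -v db=${ORACLE_SID%[1-8]} '{if (($1" "$2 == "File Name:") && ($3 !~ /spfile/) && ($3 !~ /controlfile/)) {print "asmcmd rm "$3}}' orphaned_files_${ORACLE_SID%[1-8]}_*.log > asmcmd_cleanup.sh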

It would be very wise to verify the contents of asmcmd_cleanup.sh, but when you are happy that you have excluded what you want to exclude, it just needs to be run as a user with the appropriate privileges to execute asmcmd and with the appropriate environment set up. For example, if you have implemented GI and RDBMS role separation, e.g., oracle and grid users, then you'll need to run the script as grid.

So that’s the nuts and bolts of the approach. Hopefully you’ll find it useful. If you have a cleaner (or simply better) approach to identify the orphaned files then please let me know.

Finding the ASM Diskgroups for a Database

As mentioned earlier, deriving the list of diskgroups can be achieved using:

diskgroups=$(sqlplus -s -L / as sysdba << EOF
set pages 0 feedback off
select listagg (name, ' ') within group (order by name) from v\$asm_diskgroup;
EOF
)

Generating File List for all Databases in /etc/oratab

… and if you have a number of databases you want to run this for in a single pass then you probably want to use something like the following in a script:

# Stop oraenv prompting for each SID
ORAENV_ASK=NO

# Loop over instance entries in oratab (SIDs ending 1-8), skipping ASM
for inst in $(awk -F":" '{if (($1 ~ /[1-8]$/) && ($1 !~ /\+ASM/)) {print $1}}' /etc/oratab); do
  ORACLE_SID=${inst}
  . oraenv

  # Diskgroups visible to this instance, space separated
  diskgroups=$(sqlplus -s -L / as sysdba << EOF
set pages 0 feedback off
select listagg (name, ' ') within group (order by name) from v\$asm_diskgroup;
EOF
)

  # One RMAN log per database/diskgroup combination
  for diskgroup in $diskgroups; do
    rman log=orphaned_files_${inst%[1-8]}_${diskgroup}.log << EOF
connect target /
catalog start with '+${diskgroup}/${inst%[1-8]}';
EOF
  done
done

You just need to combine the above with the awk command to generate the asmcmd rm commands and you've got something that will run for all the databases on a given host and generate a script file that can be run from the GI/ASM environment (i.e., with asmcmd in your path) in order to clear up the files that the databases do not know about.

A final word of caution: watch you don't remove backupsets that have been kept for a reason, but that the current control file doesn't know about.

As ever, feedback is most welcome, even if I've helped you to delete something you didn't want to remove and you're mad with me 😦

Oracle Linux 6 VM Ignoring "shutdown" from Host

This seems like a great candidate for a MEEK (maybe everyone else knows) post…

When I started creating Oracle Linux 6 guest VMs in my KVM-based environment it wasn't long before I noticed that I was not able to shut down the VMs via virt-manager. This seemed a bit odd as it worked fine for my Oracle Linux 5 guests.

A consequence of libvirt, via virt-manager or otherwise, not being able to stop my guests was that I could no longer just issue a shutdown on my host machine when I'd done with it for the day in order to shut everything down in one go… The solution to that soon became to issue the shutdown for each of my Oracle Linux 6 guests via ssh before stopping the host, as sketched below. It wasn't pretty, but it worked.
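
Something along these lines, with hypothetical guest names:

for guest in ol6-guest1 ol6-guest2; do
    ssh root@${guest} 'shutdown -h now'
done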

Well, today, in the process of working on something unrelated, I ran into something that looked to be a solution:

acpid

It’s completely obvious now I think about it, but it had never occurred to me that it would not be installed and running by default on Oracle Linux 6.

I ran yum install acpid in one of my VMs and started the service via service acpid start. Hey presto, I’m now able to stop that guest VM via virt-manager.
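
For completeness, the commands were simply the following (the chkconfig line is my addition so that the service survives a reboot):

# yum install acpid
# service acpid start
# chkconfig acpid on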

Incredibly simple, but will make my life just a little bit easier.