UKOUG Tech14 Slides: Testing Jumbo Frames for RAC

Just a quick post to link to the slides from my presentation at the UKOUG Tech14 conference. The slides do not appear to be visible on the Super Sunday site, hence this post. The presentation was called “Testing Jumbo Frames for RAC”, the emphasis as much on the testing as on Jumbo Frames.

Abstract:

A discussion of the use of Jumbo Frames for the RAC interconnect. We’ll cover background information and how this can be tested both at the Oracle level and in the operating system. We’ll also discuss how the testing initially failed and why that’s interesting.

This is a topic that enters the realm of network and Unix admins, but the presentation is aimed at DBAs who want to know more and want to know how to use operating system tools to investigate further.
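As a taste of the operating-system side of that testing, here is a minimal sketch of the classic MTU check on Linux. The host name `racnode2-priv` is a placeholder for your own interconnect peer, and the exact `ping` flags shown in the comments are the Linux ones; other platforms differ.

```shell
# For a 9000-byte MTU, the largest ICMP payload that fits in one frame is
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
PAYLOAD=$((9000 - 20 - 8))
echo "$PAYLOAD"

# On Linux, send a non-fragmentable ping of that size across the interconnect
# (racnode2-priv is a hypothetical peer name):
#   ping -M do -s $PAYLOAD -c 3 racnode2-priv
# If jumbo frames are not enabled end to end, you get "Frag needed and DF set"
# errors instead of replies.
```

If the oversized, don’t-fragment ping gets replies, every hop on the path is passing jumbo frames.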

Slides:

http://www.osumo.co.uk/presentations/Jumbo%20Frames-Tech14-Public.pdf


The conference itself was another success. Particular highlights for me were James Morle talking about SAN replication vs Data Guard, Ludovico Caldara talking about PDBs with MAA, Richard Foote (because I was like a silly little fanboi) and Bryn Llewellyn, purely for the way he weaves the English language into something beautiful on the ears. All of my programs will have “prologues” and “epilogues” from now on; “headers” and “footers” are so passé 🙂

The social side of things was every bit the equal of the presentations. I do enjoy hanging around in pubs and talking shop.


Sometimes we don’t need to pack data in, we need to spread it out

I’ve gotten away with doing a presentation called “Contentious Small Tables” (download here) for the UKOUG three times now, so I think it’s time to retire it from duty and serialise it here.

The planned instalments are below; I’ll change the items to links as the posts are published. The methods discussed in these posts are not exhaustive but hopefully cover most of the useful or interesting options.

  1. Sometimes we don’t need to pack data in, we need to spread it out (this post)
  2. Spreading out data – old school methods
  3. Spreading out data – with minimize records per block (also old school)
  4. Spreading out data – some partitioning methods
  5. Spreading out data – some modern methods

This post is the first instalment.

Sometimes we don’t need to pack data in, we need to spread it out

As data sets continue to grow there is more and more focus on packing data as tightly as we can. For performance reasons there are cases where we DBAs and developers need to turn this on its head and try to spread data out. An example is a small table with frequently updated or locked rows. There is no TX blocking, but instead contention on the buffers containing the rows. This contention can manifest itself as “buffer busy waits”, as one of the “gc” variants such as “gc buffer busy acquire” or “gc buffer busy release”, or maybe as “latch: cache buffers chains”.
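For readers who want to reproduce this kind of contention, here is a rough sketch of the sort of workload behind my exaggerated test. The table name `procstate` and module name `NJTEST` appear later in this post; the column names and loop counts here are illustrative assumptions, not the exact test harness.

```sql
-- A tiny table: 12 short rows easily fit in a single block.
create table procstate (proc_id number, state varchar2(10));

insert into procstate
select rownum, 'IDLE' from dual connect by level <= 12;
commit;

-- Run something like this from many concurrent sessions, spread across
-- RAC instances, so the sessions fight over the one buffer:
begin
  dbms_application_info.set_module('NJTEST', null);
  for i in 1 .. 100000 loop
    update procstate set state = 'BUSY'
    where proc_id = mod(i, 12) + 1;
    commit;
  end loop;
end;
/
```

With all twelve rows in one block, every session hammers the same buffer and the “gc buffer busy” waits pile up quickly.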

You’ll probably come at a problem of this nature from a session perspective via OEM, ASH data, v$session_event or SQL*Trace data. Taking ASH data as an example you can see below that for my exaggerated test, global cache busy waits dominate the time taken and, key for this series of posts, the “P1” and “P2” columns contain a small number of values.

column event format a35
select * from (
 select NVL(event,'CPU') event
 , count(*) waits
 , p1, p2
 from gv$active_session_history
 where sample_time between 
       to_date(to_char(sysdate,'DD-MON-YYYY')||' &from_time.','DD-MON-YYYY HH24:MI')
   and to_date(to_char(sysdate,'DD-MON-YYYY')||' &to_time.','DD-MON-YYYY HH24:MI')
 and module = 'NJTEST'
 group by NVL(event,'CPU'), p1, p2
 order by 2 desc
) where rownum <= 5;

EVENT                   WAITS    P1      P2
---------------------- ------ ----- -------
gc buffer busy acquire   1012     5  239004
gc buffer busy release    755     5  239004
gc current block busy     373     5  239004
CPU                        65     0       0

For the wait events above “P1” and “P2” have the following meanings:

select distinct parameter1, parameter2
from v$event_name
where name in ('gc buffer busy acquire'
              ,'gc buffer busy release'
              ,'gc current block busy');

PARAMETER1     PARAMETER2
-------------- --------------
file#          block#

So above we have contention on a single buffer. From ASH data we can also get the “SQL_ID” or “CURRENT_OBJ#” in order to identify the problem table. Below we show that all rows in the test table are in a single block – 239004.

select dbms_rowid.rowid_block_number(rowid) blockno
, count(*)
from procstate
group by dbms_rowid.rowid_block_number(rowid);

   BLOCKNO   COUNT(*)
---------- ----------
    239004         12
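Going the other way, if all you have is the file# and block# from “P1” and “P2”, a lookup against DBA_EXTENTS maps them back to a segment. The literals below are the values from the ASH output above:

```sql
-- Map file# 5, block# 239004 back to its owning segment.
select owner, segment_name, segment_type
from dba_extents
where file_id = 5
and 239004 between block_id and block_id + blocks - 1;
```

On a busy system this query can be slow, so the ASH route via “SQL_ID” or “CURRENT_OBJ#” is usually the first choice.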

After discovering this information, the first port of call is to look at the SQL statement, the execution plan and the supporting code. But assuming the application is already written as well as it can be, perhaps it’s time to look at the segment and data organisation.

In order to help illustrate the point of this series, here is a chart showing contention in ASH data from a crude test case. The huge grey stripe is the “gc” busy waits and the thin red stripe is standard “buffer busy waits”.

[Chart: contention – 11g normal heap table]

The next post in this series will look at some “Old School” methods of reducing this contention.

UKOUG Database Server SIG (Leeds 2013)

On Thursday I attended the UKOUG Database Server SIG in Leeds. All slides have been uploaded to the UKOUG website.

https://www.ukoug.org/events/ukoug-database-server-sig-meeting-may-2013/

It’s the first SIG I’ve attended this year and, having enjoyed it so much, I ought to try harder to get to other events. We had Oak Table presence from David Kurtz talking all things partitioning, compression and purging; two Oracle employees discussing first support and then ZFS/NetApp (I particularly enjoyed this one); and Edgars Rudans on the evolution of his Exadata Minimal Patching process. All of these presentations are well worth downloading and checking out.

The last presentation of the day was me. I’ve never presented before and it took a big step out of my comfort zone to get up there, but I’m so glad I did. I would recommend it to anyone currently sitting in the audience thinking “I wish I had the confidence to do that”. It’s nerve-racking beforehand but exhilarating afterwards.

When blogging in the past I’ve liked how it makes you think a little bit harder before pressing the publish button. I think the best thing I got out of the whole presentation process was that it made me dig even deeper to make sure I’d done my homework.

After the SIG there was a good group headed out for Leeds Oracle Beers #3 which involved local beer, good burgers and Morris Dancing, all good fun.

UKOUG UNIX SIG Meeting (Sept 2010)

Today I received an automated request to provide feedback on the UKOUG UNIX SIG Meeting I attended last week. This reminded me that I meant to blog about it but forgot. So let’s rewind a week for a quick summary.

First up was David Phizacklea from Jumbo Contracting. The presentation was about Linux on Intel. To be honest I don’t remember too much about Linux but I do remember several enjoyable war stories and some good common sense advice on life in a DBA team. A good start.

Next up was Pete Finnigan and “The right way to secure Oracle” parts 1 & 2. To get more than 90 minutes of information from someone who knows their trade so well was a real treat. I particularly liked a comment that the “hackers” we are most at threat from are not traditional hackers but normal employees trying to do their job as well as they can; they don’t necessarily realise that logging into the production database from Excel is the wrong thing to do.

Then we had a presentation on the IBM POWER7 processor from Christopher Hales of IBM. This was well delivered and had just enough pretty pictures to keep me interested. How they get more than a billion “things” onto a single chip I cannot begin to imagine. The mind boggles.

Last but not least we had Jonathan Lewis talking about bind variables, NULLs and some coding advice and tricks. This was well presented and full of good information. The best bit was at the end, when Jonathan fielded questions from the audience: a chance for him to show off his deep and varied knowledge of the database engine, and a chance for us to get even more from an already well-executed event.

Well done UKOUG.

And now I must go rethink that statistics gathering strategy!