Advanced Compression: Visualising Insert Overhead

There is a My Oracle Support note, “Master Note for OLTP Compression [ID 1223705.1]”, which talks about the overheads of “COMPRESS FOR ALL OPERATIONS”. It’s a great note and, for a very artificial test, quotes overheads of around 25% CPU, 100% redo and 100% undo when inserting data via conventional methods. The purpose of this post is to visualise when the extra resources are used.

My test uses a SQL*Plus session that repeatedly inserts a single row into a table with “COMPRESS FOR ALL OPERATIONS” enabled. After each insert I use the SQL*Plus HOST command to capture redo and undo statistics for my session. I struggled to measure CPU for each insert: the “CPU used by this session” statistic was not granular enough for my single-row inserts, and the CPU tracked by SQL Trace was rounded to 0ms or 1ms for each insert. However, the execute elapsed attribute from SQL Trace did give me good granularity. Obviously SQL Trace has its own overheads, as I’ve talked about before, but I don’t think they are important in this case as I’m only after a visualisation of when the extra resources are used.

The shell and SQL scripts used to capture statistics are at the foot of this post.

My SQL*Plus control session looks a little like this:

sqlplus neil
create table compteststats
(	testid		varchar2(10)
,	snapid		number
,	statname	varchar2(64)
,	statval		number);

create table compobj
(	col1 varchar2(30)
,	col2 number
,	col3 varchar2(30)
,	col4 varchar2(30)
,	col5 number
,	col6 varchar2(30))
compress for all operations;

exec dbms_application_info.set_module('TESTSESS','TESTSESS');
exec dbms_monitor.session_trace_enable

--capture starting values
host ./inscompstat.sh TEST1

insert into compobj values (lpad('w',30,'w'),1,lpad('x',30,'x'),lpad('y',30,'y'),2,lpad('z',30,'z'));
host ./inscompstat.sh TEST1

insert into compobj values (lpad('w',30,'w'),1,lpad('x',30,'x'),lpad('y',30,'y'),2,lpad('z',30,'z'));
host ./inscompstat.sh TEST1

-- repeat above insert/host combination until the first block is full and we move to a second

exit

I was then able to report the redo and undo statistics using the SQL below:

select snapid row#
,	max(decode(statname,'redo size',delta,0)) redo_size
,	max(decode(statname,'undo change vector size',delta,0)) undo_size
,	max(decode(statname,'HSC OLTP positive compression',delta,0)) HSC_OLTP_positive_comp
,	max(decode(statname,'HSC OLTP Compressed Blocks',statval,0)) HSC_OLTP_Compressed_Blocks
from (
	select testid,snapid,statname, statval
	,statval-lag(statval) over (partition by testid,statname order by snapid) delta
	from compteststats
	where testid = 'TEST1'
	and snapid > 0
)
group by snapid
order by snapid;
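The inner query’s LAG does the per-statistic delta arithmetic: each snapshot holds a cumulative v$sesstat value, and the delta is simply current minus previous. As a rough illustration (hypothetical redo values, with Python standing in for the window function):

```python
# Cumulative 'redo size' snapshots captured after each insert (hypothetical values).
snapshots = [1000, 1604, 2036, 2468]

# Equivalent of LAG(statval) OVER (ORDER BY snapid): delta = current - previous.
deltas = [curr - prev for prev, curr in zip(snapshots, snapshots[1:])]
```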

And I captured the elapsed time for each insert from the SQL Trace file using the grep/awk combo below. I then combined these values with the redo/undo figures in a spreadsheet.

grep "^EXEC #139771649698992" orcl1_ora_6987.trc | awk -F[,=] '{print $4}'
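For illustration, an EXEC line in the trace file looks roughly like the (abbreviated, hypothetical) one below. Splitting on ‘,’ and ‘=’ makes the elapsed time the fourth field, which is exactly what `awk -F[,=] '{print $4}'` picks out; a Python sketch of the same parse:

```python
import re

# Abbreviated, hypothetical EXEC line from a SQL Trace file.
line = "EXEC #139771649698992:c=0,e=123,p=0,cr=1,cu=4"

# awk -F[,=] splits on ',' or '=' ; awk's $4 is the elapsed time in microseconds.
fields = re.split(r"[,=]", line)
elapsed_us = fields[3]  # awk's $4 is Python index 3
```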

The chart produced from the spreadsheet is below; you can click to enlarge it, but the small version gives the gist.

vis-advcomp_1

The number of inserts is plotted on the X axis, and the Y axis shows either the value from v$sesstat or elapsed microseconds from the trace file. The absolute values are not important. What I found interesting was being able to visualise the life of the block. You can see the spikes in redo and undo each time the block is (re)compressed, and as the block gradually fills up the compressions become more frequent, until there is a lull in the compressions as we start to fill a second data block. The last spike on the chart is the first compression of the second block.

Excerpts from my data are below. You can see that “HSC OLTP positive compression” has a delta of 1 for each spike, demonstrating that the spikes are caused by block compression.

      ROW#  REDO_SIZE  UNDO_SIZE HSC_OLTP_POSITIVE_COMP HSC_OLTP_COMPRESSED_BLOCKS
---------- ---------- ---------- ---------------------- --------------------------
         1        604        112                      0                          0
         2        432         68                      0                          0
         3        432         68                      0                          0
         4        432         68                      0                          0
         5        432         68                      0                          0
         6        432         68                      0                          0
         7        432         68                      0                          0
         8        432         68                      0                          0
...
        51        432         68                      0                          0
        52        432         68                      0                          0
        53       9092       8320                      1                          1
        54        420         68                      0                          1
        55        420         68                      0                          1
...
        97        420         68                      0                          1
        98        420         68                      0                          1
        99       9152       8320                      1                          1
       100        420         68                      0                          1
       101        420         68                      0                          1
...
       613        432         68                      0                          1
       614        432         68                      0                          1
       615       9092       8320                      1                          2
       616        420         68                      0                          2
       617        420         68                      0                          2

As stated previously, here are my scripts.

inscompstat.sh

sqlplus -S neil/neil <<SQLINP
@inscompstat $1
exit
SQLINP

inscompstat.sql

set verify off
insert into compteststats
(testid,snapid,statname,statval)
select '&1', c.maxsnap, s.name, s.value
from (
        select n.name, s.value
        from v$session u, v$sesstat s, v$statname n
        where u.sid = s.sid
        and s.statistic# = n.statistic#
        and u.module = 'TESTSESS'
        and n.name in ( 'redo size'
                ,       'undo change vector size'
                ,       'HSC OLTP positive compression'
                ,       'HSC OLTP Compressed Blocks')
        ) s
,       (select nvl(max(snapid),-1)+1 maxsnap
        from compteststats
        where testid = '&1') c
;

Compression Advisory: DBMS_COMPRESSION

I’ve been doing a lot of testing with COMPRESS FOR ALL OPERATIONS (OLTP compression) recently and spent a bit of time with the Compression Advisor. This post documents a few items I noticed along the way.

I picked up instructions on how to run the Compression Advisor from My Oracle Support note “Compression Advisory in 11GR2: Using DBMS_COMPRESSION [ID 762974.1]”. The note advises me to create a new tablespace and informs me that two temporary segments are created.

Please create a tablespace before running compression advisor as it creates two temporary tables and will do significant amount of I/O traffic against them.

Below is a sneak preview of the temporary segments that will be created.

create table "ACME".DBMS_TABCOMP_TEMP_UNCMP tablespace "SCRATCH" nologging
 as select /*+ DYNAMIC_SAMPLING(0) FULL("ACME"."ACCS") */ *
 from "ACME"."ACCS"  sample block( 99) mytab

create table "ACME".DBMS_TABCOMP_TEMP_CMP organization heap 
 tablespace "SCRATCH" compress for all operations nologging
 as select /*+ DYNAMIC_SAMPLING(0) */ *
 from "ACME".DBMS_TABCOMP_TEMP_UNCMP mytab

DBMS_COMPRESSION runs using invoker rights so the privileges of the user I log on as are relevant.

select procedure_name, authid 
from dba_procedures 
where owner = 'SYS' 
and object_name = 'DBMS_COMPRESSION';

PROCEDURE_NAME                 AUTHID
------------------------------ ------------
INCREMENTAL_COMPRESS           CURRENT_USER
GET_COMPRESSION_TYPE           CURRENT_USER
GET_COMPRESSION_RATIO          CURRENT_USER
                               CURRENT_USER

Taking all of the above into account I’ve created a new SCRATCH tablespace for my temporary segments and will run the analysis logged in as myself and not the schema user who owns the segment I want to compress. As we at ORAganism are so security minded 😉 I am not permitted to have the standard “dba” role and have a cut down one with only the permissions I need.

create tablespace scratch datafile '+data' size 50m;

create role nj_dba;
create user nj identified by nj;
grant create session, create any table, drop any table, select any table to nj_dba;
grant execute on sys.dbms_monitor to nj_dba;
grant nj_dba to nj;

Also according to note “ORA-01536: space quota exceeded for tablespace along with DBMS_COMPRESSION.GET_COMPRESSION_RATIO [ID 970324.1]” I need to give the owner of the segment being analysed a quota on my scratch tablespace.

To get rid of the errors, the user specified as the second parameter in the DBMS_COMPRESSION.GET_COMPRESSION_RATIO need to have … some QUOTA over the scratch tablespace specified by the first parameter.

Therefore:

alter user acme quota unlimited on scratch;

I have created my script to call the advisor, dbms_comp.sql, as below.

set serveroutput on
set feedback on
set verify off

declare
	blkcnt_cmp binary_integer;
	blkcnt_uncmp binary_integer;
	row_cmp binary_integer;
	row_uncmp binary_integer;
	cmp_ratio number;
	comptype_str varchar2(60);
begin
	dbms_compression.get_compression_ratio(
		scratchtbsname => upper('&3.')
	,	ownname => upper('&1.')
	,	tabname => upper('&2.')
	,	partname => null
	,	comptype => dbms_compression.comp_for_oltp
	,	blkcnt_cmp => blkcnt_cmp
	,	blkcnt_uncmp => blkcnt_uncmp
	,	row_cmp => row_cmp
	,	row_uncmp => row_uncmp
	,	cmp_ratio => cmp_ratio
	,	comptype_str => comptype_str
	,	subset_numrows => &4.
	);
	DBMS_OUTPUT.PUT_LINE('Block count compressed = ' || blkcnt_cmp);
	DBMS_OUTPUT.PUT_LINE('Block count uncompressed = ' || blkcnt_uncmp);
	DBMS_OUTPUT.PUT_LINE('Row count per block compressed = ' || row_cmp);
	DBMS_OUTPUT.PUT_LINE('Row count per block uncompressed = ' || row_uncmp);
	--DBMS_OUTPUT.PUT_LINE('Compression type = ' ||comptype_str);
	DBMS_OUTPUT.PUT_LINE('Compression ratio = '||round(blkcnt_uncmp/blkcnt_cmp,1)||' to 1');
	DBMS_OUTPUT.PUT_LINE('Compression % benefit = '||round((blkcnt_uncmp-blkcnt_cmp)/blkcnt_uncmp*100,1));
	--DBMS_OUTPUT.PUT_LINE('Compression ratio org= '||cmp_ratio);
end;
/
set verify on

So we should be good to go:

conn nj/nj
@orcl> @dbms_comp acme accs scratch 200000
declare
*
ERROR at line 1:
ORA-20000: Compression Advisor tablespace "SCRATCH" does not exist
ORA-06512: at "SYS.PRVT_COMPRESSION", line 776
ORA-06512: at "SYS.DBMS_COMPRESSION", line 214
ORA-06512: at line 9

With SQL Trace enabled I see the following in the trace file.

PARSING IN CURSOR #139655103675152 len=64 dep=1 uid=37 oct=3 lid=37 tim=1357852093619559 hv=814796271 ad='67de3b40' sqlid='g6pggnss91mgg'
SELECT count(*) FROM user_tablespaces WHERE tablespace_name= :b1
END OF STMT
...
BINDS #139655103675152:
 Bind#0
  ...
  value="SCRATCH"

Even though the temporary segments are created in the same schema as the object being analysed, my own account also needs a quota on the scratch tablespace. No segments are created in my schema, but I need the quota in order to get past the check on “user_tablespaces”.

alter user nj quota unlimited on scratch;

Let’s try again:

@orcl> @dbms_comp acme accs scratch 200000
declare
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.PRVT_COMPRESSION", line 776
ORA-06512: at "SYS.DBMS_COMPRESSION", line 214
ORA-06512: at line 9

I couldn’t find anything about this on My Oracle Support, or any evidence in the SQL Trace file. However, the power of trial and error tracked the missing privilege down to “ANALYZE ANY” (I could have granted OEM_MONITOR to provide this privilege, but we’re all about the principle of least privilege today).

grant ANALYZE ANY to NJ_DBA;

Let’s try again… again:

@orcl> @dbms_comp acme accs scratch 200000
Block count compressed = 1612
Block count uncompressed = 1632
Row count per block compressed = 123
Row count per block uncompressed = 121
Compression ratio = 1 to 1
Compression % benefit = 1.2

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.82
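The two derived figures are simple arithmetic on the block counts returned by the advisor; plugging the output above into the same ROUND expressions the script uses (a sketch in Python):

```python
blkcnt_uncmp = 1632  # "Block count uncompressed" from the run above
blkcnt_cmp = 1612    # "Block count compressed" from the run above

# round(blkcnt_uncmp/blkcnt_cmp, 1) gives the "n to 1" compression ratio
ratio = round(blkcnt_uncmp / blkcnt_cmp, 1)

# round((blkcnt_uncmp-blkcnt_cmp)/blkcnt_uncmp*100, 1) gives the % benefit
benefit = round((blkcnt_uncmp - blkcnt_cmp) / blkcnt_uncmp * 100, 1)
```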

Sadly my table is not going to benefit from compression, but the advisor run is a success nonetheless. So the full list of required privileges is:

grant CREATE SESSION to NJ_DBA;
grant ANALYZE ANY to NJ_DBA;
grant CREATE ANY TABLE to NJ_DBA;
grant DROP ANY TABLE to NJ_DBA;
grant SELECT ANY TABLE to NJ_DBA;

You also need a quota on the scratch tablespace for the schema user, and a seemingly unnecessary one for the executing user.

If you use VPD then don’t forget to take that into account.

And one last point: the temporary tables are created in the target schema and their names are not session/object specific, so don’t try to run Compression Advisor calls in parallel:

@orcl> @dbms_comp acme accs scratch 200000
declare
*
ERROR at line 1:
ORA-00955: name is already used by an existing object
ORA-06512: at "SYS.PRVT_COMPRESSION", line 776
ORA-06512: at "SYS.DBMS_COMPRESSION", line 214
ORA-06512: at line 9