
Going to Exadata: what changes for a DBA, part 1

There are a lot of things that can be unsettling for a DBA coming from a « normal » database to Exadata.

The first one I discovered, and the one I will talk about here, is HCC compression.

This compression type is only available on Exadata and offers four new compression levels:

  • QUERY LOW
  • QUERY HIGH
  • ARCHIVE LOW
  • ARCHIVE HIGH

These compression methods are quite different for a « normal » DBA: instead of compressing the data inside a row-oriented block, they use a columnar algorithm that stores and compresses the values of each column together inside compression units (see http://www.oracle.com/technetwork/middleware/bi-foundation/ehcc-twp-131254.pdf for more details).

This new compression technology allows much better compression ratios than a normal database (around 10x for QUERY HIGH and 15x for ARCHIVE HIGH).

Using these methods is not different from a normal Oracle database; they are enabled with the following command:

ALTER TABLE xxx COMPRESS FOR QUERY HIGH;
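Note that this ALTER TABLE only sets the compression attribute for future direct loads: the existing rows stay as they are. A minimal sketch to rebuild an existing segment with HCC (the table and index names are mine, adapt them to your case):

ALTER TABLE mytab MOVE COMPRESS FOR QUERY HIGH;
-- the MOVE invalidates the indexes, they must be rebuilt
ALTER INDEX mytab_pk REBUILD;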

Here are some things you should know before starting to use HCC compression:

  • Only direct-path operations can produce HCC-compressed rows: direct-path inserts (with the APPEND hint), parallel DML, CREATE TABLE AS SELECT and SQL*Loader in direct mode. If you do a conventional insert, for example, the compression will not occur (see the first sketch after this list).
  • Be careful with updates, because HCC does not really « support » updates. When you update a row that was compressed with QUERY HIGH, for example, the row is uncompressed (moved out of its compression unit) and rewritten in a new block that uses OLTP compression. The result is a lower compression ratio on that table/partition/subpartition.
  • You have to know that the COMPRESS_FOR column of the <DBA|ALL|USER>_<TABLES|TAB_PARTITIONS|TAB_SUBPARTITIONS> views does not reflect reality: it only shows the configured attribute of the table/partition/subpartition. You can therefore see QUERY HIGH displayed for a table that contains no compressed row at all. The only way to check whether rows are really compressed is the DBMS_COMPRESSION package (see the second sketch after this list).
  • Deletes preserve the compression, BUT the compression unit is not released as long as it still contains rows. Too many deletes can therefore leave a lot of unused space in the table.
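To illustrate the first point, here is a minimal sketch (the src and mytab table names are mine) comparing a conventional insert with a direct-path insert on an HCC table:

-- conventional insert: the new rows are NOT HCC-compressed
INSERT INTO mytab SELECT * FROM src;
COMMIT;

-- direct-path insert: the new rows are written HCC-compressed
INSERT /*+ APPEND */ INTO mytab SELECT * FROM src;
COMMIT;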
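And to check what is really stored, here is a sketch using DBMS_COMPRESSION.GET_COMPRESSION_TYPE on a sample of rows (the owner and table names are mine; the COMP_FOR_* constants are the 11.2 ones):

SET SERVEROUTPUT ON
DECLARE
  l_type NUMBER;
BEGIN
  -- sample a few rows and ask Oracle how each one is actually compressed
  FOR r IN (SELECT rowid rid FROM myuser.mytab WHERE ROWNUM <= 10) LOOP
    l_type := DBMS_COMPRESSION.GET_COMPRESSION_TYPE(
                ownname => 'MYUSER',
                tabname => 'MYTAB',
                row_id  => r.rid);
    DBMS_OUTPUT.PUT_LINE(r.rid || ' -> ' ||
      CASE l_type
        WHEN DBMS_COMPRESSION.COMP_NOCOMPRESS       THEN 'NOT COMPRESSED'
        WHEN DBMS_COMPRESSION.COMP_FOR_OLTP         THEN 'OLTP'
        WHEN DBMS_COMPRESSION.COMP_FOR_QUERY_LOW    THEN 'QUERY LOW'
        WHEN DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH   THEN 'QUERY HIGH'
        WHEN DBMS_COMPRESSION.COMP_FOR_ARCHIVE_LOW  THEN 'ARCHIVE LOW'
        WHEN DBMS_COMPRESSION.COMP_FOR_ARCHIVE_HIGH THEN 'ARCHIVE HIGH'
        ELSE TO_CHAR(l_type)
      END);
  END LOOP;
END;
/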

For myself, my own data gave me a compression ratio of 12x with QUERY HIGH and around 16x with ARCHIVE HIGH.
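If you want an estimate on your own data before compressing anything, you can also use DBMS_COMPRESSION.GET_COMPRESSION_RATIO. A sketch (the scratch tablespace, owner and table names are mine; constants are the 11.2 ones):

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  -- samples the table into the scratch tablespace and compresses the sample
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',   -- scratch tablespace used for the sampling
    ownname        => 'MYUSER',
    tabname        => 'MYTAB',
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated ratio for ' || l_comptype_str ||
                       ': ' || ROUND(l_cmp_ratio, 1) || 'x');
END;
/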