Monthly archives: December 2012

Going to Exadata: what changes for a DBA, Part 1

There are a lot of things that can be unsettling for a DBA coming from a « normal » database to Exadata.

The first one I discovered and will talk about is HCC (Hybrid Columnar Compression).

This compression type is only available on Exadata and offers 4 new compression methods:

  • QUERY LOW
  • QUERY HIGH
  • ARCHIVE LOW
  • ARCHIVE HIGH

Those compression methods are quite different for a « normal » DBA, since they do not compress the data within a single block, but use a different algorithm that works column by column (see http://www.oracle.com/technetwork/middleware/bi-foundation/ehcc-twp-131254.pdf for more details).

This new compression method allows a better compression ratio than a normal database (around 10x for QUERY HIGH and 15x for ARCHIVE HIGH).

Using those methods is no different from a normal Oracle database; they are enabled with the following command:

ALTER TABLE xxx COMPRESS FOR QUERY HIGH;
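
Note that this command only changes the table attribute: rows already stored stay as they are, and only future direct-path loads will be compressed. Here is a minimal sketch of the variants, assuming a hypothetical SALES table:

-- Attribute only: existing rows keep their format, future direct loads use HCC
ALTER TABLE sales COMPRESS FOR QUERY HIGH;

-- Rebuild the segment to also compress the rows that are already there
ALTER TABLE sales MOVE COMPRESS FOR QUERY HIGH;

-- The same attribute can be set per partition
ALTER TABLE sales MODIFY PARTITION sales_2011 COMPRESS FOR ARCHIVE HIGH;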

Here are some things you have to know before starting to use HCC compression:

  • Only direct-path inserts (with the APPEND hint), parallel DML, CREATE TABLE AS SELECT and SQL*Loader in direct mode can use the compression. If you do a conventional insert, for example, the compression will not occur.
  • Be careful with updates, because HCC does not really « support » updates with compression. When you update a row that has been compressed in QUERY HIGH, for example, this row is uncompressed (moved out of the compression unit) and then added to a new block that uses OLTP compression. The result is a lower compression ratio on that table/partition/subpartition.
  • You have to know that the COMPRESS_FOR column of the <DBA|ALL|USER>_<TABLES|TAB_PARTITIONS|TAB_SUBPARTITIONS> views does not reflect reality. It only reflects the configuration of the table/partition/subpartition, which means a table can display QUERY HIGH while containing no compressed row at all. The only way to check whether rows are really compressed is the DBMS_COMPRESSION package (see the sketch after this list).
  • Deletes preserve the compression, BUT you have to know that a compression unit is not released as long as it still contains rows. So it can result in a lot of unused space in the table if you do too many deletes.
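
For example, here is a minimal sketch of such a check, assuming a hypothetical MYSCHEMA.MYTABLE (the names are made up) and SET SERVEROUTPUT ON:

-- Sample a few rows and ask DBMS_COMPRESSION how each one is really stored
DECLARE
  l_type NUMBER;
BEGIN
  FOR r IN (SELECT rowid rid FROM myschema.mytable WHERE rownum <= 5) LOOP
    l_type := DBMS_COMPRESSION.GET_COMPRESSION_TYPE('MYSCHEMA', 'MYTABLE', r.rid);
    DBMS_OUTPUT.PUT_LINE(r.rid || ' : ' ||
      CASE l_type
        WHEN DBMS_COMPRESSION.COMP_NOCOMPRESS       THEN 'not compressed'
        WHEN DBMS_COMPRESSION.COMP_FOR_OLTP         THEN 'OLTP'
        WHEN DBMS_COMPRESSION.COMP_FOR_QUERY_LOW    THEN 'QUERY LOW'
        WHEN DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH   THEN 'QUERY HIGH'
        WHEN DBMS_COMPRESSION.COMP_FOR_ARCHIVE_LOW  THEN 'ARCHIVE LOW'
        WHEN DBMS_COMPRESSION.COMP_FOR_ARCHIVE_HIGH THEN 'ARCHIVE HIGH'
        ELSE 'unknown (' || l_type || ')'
      END);
  END LOOP;
END;
/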

For myself, my own data allowed me to reach a compression ratio of 12x in QUERY HIGH and around 16x in ARCHIVE HIGH.

Backing up Exadata databases on NetBackup with InfiniBand

Recently I had to check the backup configuration of my Exadata. We realized that the configuration could be changed to obtain better performance with NetBackup, because we were not using the InfiniBand network.

The configuration is composed of 3 servers (these IP addresses are completely fake and are just here to help you understand how it works):

  • 1 Exadata with the following IP addresses
    Public : 1.1.1.1 (for name exadb01)
    Priv : 2.2.2.2 (for name exadb01-priv)
  • 1 NetBackup Media Server with the following IP addresses
    Public : 3.3.3.3 (for name media)
    Priv : 4.4.4.4 (for name media-priv)
  • 1 NetBackup Master Server
    Public : 5.5.5.5 (for name master)

The whole configuration boils down to faking the Exadata server's IP for the master server.

The first thing to do is to force the Exadata and the media server to communicate over the InfiniBand network. To do that you just need to add an entry to the hosts file on both the Exadata and the media server.

On the Exadata you should add this (so the Exadata will talk to the media server over the InfiniBand network):

4.4.4.4 media.domain media

On the media server you should add this (so it can reach the Exadata on its InfiniBand network card):

2.2.2.2 exadb01-priv.domain exadb01-priv

To finish the configuration you need to fool the master server into thinking it is communicating with the Exadata on the private interface, so you need to add this to the hosts file of the master server:

1.1.1.1 exadb01-priv.domain exadb01-priv

Now the configuration is complete from the network point of view. To finish it on the NetBackup side, you have to use the exadb01-priv name as the client name.
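
For example (a sketch, assuming the NetBackup for Oracle agent), the NB_ORA_CLIENT variable lets you force that client name on each RMAN channel:

RUN {
  -- One channel shown here; I ran 8 of them in parallel
  ALLOCATE CHANNEL ch1 DEVICE TYPE SBT_TAPE
    PARMS 'ENV=(NB_ORA_CLIENT=exadb01-priv)';
  BACKUP DATABASE;
}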

After this change I have been able to back up at 49 MB/s per channel with 8 channels, giving around 1.3 TB/h in total. The backup was started from node 1 with a direct database connection.

My limitation comes from the storage used for the VTL.

Hope this helps you.

Starting to post again

This is it!
I had been thinking about reopening my blog for so long.

With my new assignment, I have been able to work on a new Oracle technology that has made me think about many things that are new to me.

Those are just personal reflections that you may find interesting.