There is a new configuration in DBMS_STATS that helps Oracle understand that it is running on an Exadata system.
Most of the time, a full table scan performs better than an index scan on Exadata (I say most of the time, meaning you still have to test it yourself). The problem lies with the default system statistics: with them, the optimizer does not know that a full table scan may be less expensive than accessing an index.
To let Oracle know it is running on an Exadata system, there is a new GATHER_SYSTEM_STATS mode, exclusive to Exadata.
To enable this new mode, you have to execute the following command
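The command is the standard system-stats call with the new 'EXADATA' gathering mode:

```sql
-- Gather system statistics in the Exadata-specific mode
-- (run as a user with the privileges required by DBMS_STATS.GATHER_SYSTEM_STATS)
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');
```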
This new mode is available from 11.2.0.2 BP18 or 11.2.0.3 BP8 (more information in Metalink note "Oracle Sun Database Machine Setup/Configuration Best Practices" [ID 1274318.1])
I recently hit a bug (which seems to be fixed in BP6 for Exadata), but I thought it was interesting to post about it, as it may help someone.
When a table is compressed, you need to be careful with extended statistics, because Oracle creates a hidden virtual column to manage those stats. At first there is no problem; the trouble starts when you drop the extended stats and then try to use incremental statistics on the table.
When you drop the extended stats, the hidden virtual column is not removed (the first problem): because of the compression, it is just marked as UNUSED.
My issue came from this, combined with incremental stats, which had a bug and knew nothing about UNUSED columns. So each time I tried to gather stats on the table, Oracle gathered all the stats again and again, because it kept finding that this UNUSED column had no statistics.
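You can check for this leftover column yourself. The sketch below uses a placeholder table MY_TABLE with columns COL1 and COL2:

```sql
-- Create, then drop, an extended statistic on a compressed table
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'MY_TABLE', '(COL1,COL2)')
FROM   dual;
EXEC DBMS_STATS.DROP_EXTENDED_STATS(USER, 'MY_TABLE', '(COL1,COL2)');

-- The hidden virtual column may not be removed: look for leftover
-- UNUSED columns on the table
SELECT * FROM user_unused_col_tabs WHERE table_name = 'MY_TABLE';
```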
There are a lot of things that can be disturbing for a DBA coming from a "normal" database to Exadata.
The first one I discovered, and will talk about here, is HCC (Hybrid Columnar Compression).
This compression type is only available on Exadata and offers 4 new compression methods:
- QUERY LOW
- QUERY HIGH
- ARCHIVE LOW
- ARCHIVE HIGH
These compression methods are quite different for a "normal" DBA, since they do not compress the data within a block but use a different algorithm that works column by column (see http://www.oracle.com/technetwork/middleware/bi-foundation/ehcc-twp-131254.pdf for more details)
This new compression method allows much better compression ratios than a normal database (around 10x for QUERY HIGH and 15x for ARCHIVE HIGH)
Using these methods is no different from a standard Oracle database; they are enabled with the following command
ALTER TABLE xxx COMPRESS FOR QUERY HIGH;
Here are some things you have to know before starting to use HCC compression:
- Only direct-path inserts (with the APPEND hint), parallel DML, CREATE TABLE AS SELECT and SQL*Loader in direct mode will use the compression. If you do a conventional insert, for example, the rows will not be compressed.
- Be careful with updates, because HCC does not really "support" updating compressed data. When you update a row that has been compressed with QUERY HIGH, for example, the row is uncompressed (moved out of its compression unit) and then inserted into a new block that uses OLTP compression. The result is a lower compression ratio on the table/partition/subpartition.
- You have to know that the COMPRESS_FOR column of the <DBA|ALL|USER>_<TABLES|TAB_PARTITIONS|TAB_SUBPARTITIONS> views does not reflect reality; it only reflects the configuration of the table/partition/subpartition. This means a table can show QUERY HIGH while containing no compressed rows at all. The only way to check whether rows are actually compressed is to use the DBMS_COMPRESSION package.
- Deletes preserve the compression, BUT you have to know that a compression unit is not released as long as it still contains rows. This can result in a lot of unused space in the table if you do too many deletes.
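To illustrate the point about COMPRESS_FOR, here is a sketch (MY_TABLE is a placeholder) that checks how a sample of rows is really stored using DBMS_COMPRESSION.GET_COMPRESSION_TYPE; a table declared QUERY HIGH can still return NOCOMPRESS for conventionally inserted rows:

```sql
-- Map the COMP_* constants returned by GET_COMPRESSION_TYPE to names
-- and count how the sampled rows are actually compressed
SELECT CASE DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'MY_TABLE', ROWID)
         WHEN 1  THEN 'NOCOMPRESS'
         WHEN 2  THEN 'OLTP'
         WHEN 4  THEN 'QUERY HIGH'
         WHEN 8  THEN 'QUERY LOW'
         WHEN 16 THEN 'ARCHIVE HIGH'
         WHEN 32 THEN 'ARCHIVE LOW'
       END AS real_compression,
       COUNT(*) AS row_count
FROM   my_table
WHERE  ROWNUM <= 1000
GROUP  BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'MY_TABLE', ROWID);
```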
For my part, my own data allowed me to reach a compression ratio of 12x with QUERY HIGH and around 16x with ARCHIVE HIGH.
Recently I had to check the backup configuration of my Exadata. We realized that the configuration could be changed to obtain better performance with NetBackup, because we were not using the InfiniBand network.
The configuration is composed of 3 servers (these IP addresses are completely made up and are only here to help you understand how it works):
- 1 Exadata with the following IP addresses
Public: 220.127.116.11 (name exadb01)
Priv: 18.104.22.168 (name exadb01-priv)
- 1 NetBackup media server with the following IP addresses
Public: 22.214.171.124 (name media)
Priv: 126.96.36.199 (name media-priv)
- 1 NetBackup master server
Public: 188.8.131.52 (name master)
The whole configuration boils down to faking the IP of the Exadata server as seen by the master server.
The first thing to do is to force the Exadata and the media server to communicate over the InfiniBand network. To do that, you just need to add entries to the hosts files of both the Exadata and the media server.
On the Exadata you should add the following, so the Exadata talks to the media server over the InfiniBand network (this is the media server's private IP from the list above):
126.96.36.199 media.domain media
On the media server you should add the following, so it communicates with the Exadata through the Exadata's InfiniBand interface (the Exadata's private IP):
18.104.22.168 exadb01-priv.domain exadb01-priv
To finish, you need to fool the master server into thinking it is communicating with the Exadata on the private interface: add the entry below to the hosts file of the master server. It points the private name at the Exadata's public IP, since the master has no access to the InfiniBand network:
220.127.116.11 exadb01-priv.domain exadb01-priv
Now the configuration is complete from the network point of view. To finish it on the NetBackup side, you have to use the exadb01-priv name as the client name.
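As a sketch of that last step (the path and parameters below are typical but may differ with your NetBackup version and setup), the private name goes into the client-side bp.conf, and can also be passed to RMAN through the NB_ORA_CLIENT parameter of the NetBackup for Oracle agent:

```
# /usr/openv/netbackup/bp.conf on the database node (client side)
CLIENT_NAME = exadb01-priv

# RMAN channel allocation using the private name; adjust to your environment
ALLOCATE CHANNEL ch1 DEVICE TYPE SBT_TAPE
  PARMS 'ENV=(NB_ORA_CLIENT=exadb01-priv)';
```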
After this change I was able to back up at 49 MB/s per channel with 8 channels, giving about 1.3 TB/h in total. The backup was started from node 1 with a direct database connection.
My limitation comes from the storage used for the VTL.
Hope this helps you.