Archives for the category Exadata

Entries related to Oracle Exadata

New Exadata mode for DBMS_STATS.GATHER_SYSTEM_STATS

There is a new option in DBMS_STATS that helps Oracle understand that it is running on an Exadata machine.

Most of the time a FULL TABLE SCAN performs better than an INDEX SCAN on Exadata systems (I say most of the time, meaning you still have to test for yourself). The problem lies with the default system stats: with them the optimizer does not know that a FTS may be less expensive than accessing an index.

To help Oracle know it is working on an EXADATA system, there is a new GATHER_SYSTEM_STATS mode exclusively for EXADATA.

To enable this new mode you have to execute the following command:

exec dbms_stats.gather_system_stats('EXADATA');
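
If you want to check what this mode has written, the system statistics end up in sys.aux_stats$ (a quick verification query; to my understanding the EXADATA mode mainly adjusts MBRC and IOTFRSPEED, which is what lowers the FTS cost):

-- System statistics currently in use by the optimizer
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN';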

This new mode is available from 11.2.0.2.18 or 11.2.0.3.8 (more information in Metalink note Oracle Sun Database Machine Setup/Configuration Best Practices [ID 1274318.1]).

Problems with incremental stats + compression + extended stats

I recently hit a bug (that seems to be fixed in BP6 for Exadata).

But I thought it was interesting to post about it (it may help someone).

When you have a compressed table, you need to be careful with extended statistics, because they add a hidden virtual column to manage those stats. So far no problem; the problems start when you drop the extended stats and then try to use incremental stats on the table.

When you drop the extended stats, Oracle does not remove the hidden virtual column (first problem): because of the compression it just marks the column as UNUSED.

My issue came from this, combined with incremental stats, which had a bug and knew nothing about UNUSED columns. So each time I tried to gather stats on the table, Oracle gathered all the stats again and again, because it found that this UNUSED column had no stats…
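
To make the scenario concrete, here is a minimal sketch (MY_TAB and the column group are hypothetical; the dictionary views are the standard ones):

-- Create extended stats (a column group) on a compressed table:
-- this silently adds a hidden virtual column
SELECT dbms_stats.create_extended_stats(user, 'MY_TAB', '(COL1,COL2)') FROM dual;

-- The hidden virtual column is visible here
SELECT column_name, hidden_column, virtual_column
FROM   user_tab_cols
WHERE  table_name = 'MY_TAB';

-- Dropping the extension on a compressed table does NOT remove the column,
-- it is only marked UNUSED
exec dbms_stats.drop_extended_stats(user, 'MY_TAB', '(COL1,COL2)');

SELECT * FROM user_unused_col_tabs WHERE table_name = 'MY_TAB';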

Going to Exadata: what changes for a DBA, PART 1

There are a lot of things that can be disturbing for a DBA coming from a « normal » database to Exadata.

The first one I discovered and will talk about is: HCC (Hybrid Columnar Compression).

This compression type is only available on Exadata and offers 4 new compression methods:

  • QUERY LOW
  • QUERY HIGH
  • ARCHIVE LOW
  • ARCHIVE HIGH

Those compression methods are quite different for a « normal » DBA since they do not compress the data inside a block, but use a different algorithm that works column by column (see http://www.oracle.com/technetwork/middleware/bi-foundation/ehcc-twp-131254.pdf for more details).

These new compression methods allow a better compression ratio than a normal database (10x for QUERY HIGH and 15x for ARCHIVE HIGH).

Using those methods is no different from a regular Oracle database; they are enabled with the following command:

ALTER TABLE xxx COMPRESS FOR QUERY HIGH;

Here are some things you have to know before starting to use HCC compression:

  • Only direct-path inserts (with the APPEND hint), parallel DML, CREATE TABLE AS SELECT and SQL*Loader in direct mode can use the compression. For example, if you do a conventional insert, the compression will not occur (see the sketch after this list).
  • Be careful with updates, because HCC does not « support » updates while keeping the compression. In fact, when you update a row that has been compressed in QUERY HIGH, for example, this row is uncompressed (moved out of the compression unit) and then added to a new block that uses OLTP compression. The result is a lower compression ratio on this table/partition/subpartition.
  • You have to know that the COMPRESS_FOR column of the <DBA|ALL|USER>_<TABLES|TAB_PARTITIONS|TAB_SUBPARTITIONS> views does not reflect reality; it only reflects the configuration of the table/partition/subpartition. It means you can have QUERY HIGH displayed for a table but not a single compressed row in it. The only way to check whether rows are actually compressed is the DBMS_COMPRESSION package (also shown in the sketch below).
  • Deletes preserve the compression, BUT you have to know that a compression unit is not released as long as it still contains rows. So it can result in a lot of unused space in the table if you do too many deletes.
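
As announced in the list above, here is a small sketch of the first and third points (MY_HCC_TAB and SOURCE_TAB are hypothetical tables, MY_HCC_TAB being declared COMPRESS FOR QUERY HIGH):

-- Conventional insert: the rows will NOT be HCC compressed
INSERT INTO my_hcc_tab SELECT * FROM source_tab;

-- Direct-path insert: the rows WILL be HCC compressed
INSERT /*+ APPEND */ INTO my_hcc_tab SELECT * FROM source_tab;
COMMIT;

-- Check the real compression of a few rows; in 11.2 the function returns
-- 1 for no compression and 4 for QUERY HIGH
SELECT dbms_compression.get_compression_type(user, 'MY_HCC_TAB', rowid) AS comp_type
FROM   my_hcc_tab
WHERE  rownum <= 10;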

For myself, my own data allowed me to reach a compression ratio of 12x with QUERY HIGH and around 16x with ARCHIVE HIGH.
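
If you want an estimate on your own data before compressing anything, the compression advisor in DBMS_COMPRESSION can sample the table for you (a sketch; SCRATCH_TBS and MY_TAB are placeholders):

DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'SCRATCH_TBS',   -- tablespace used for the sample
    ownname        => user,
    tabname        => 'MY_TAB',
    partname       => NULL,
    comptype       => dbms_compression.comp_for_query_high,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line(l_comptype_str || ' estimated ratio: ' || l_cmp_ratio);
END;
/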