Thursday, January 22, 2009

Quick Optimizer STATISTICS transfer between schemas

Oracle DBMS_STATS provides a quick way to transfer optimizer statistics between databases or schemas.
The basic steps are:

  1. Create a stats table to hold the exported stats data.
    EXEC DBMS_STATS.create_stat_table('SCHEMA1','ST_TABLE');
  2. Export the schema stats into the stats table.
    EXEC DBMS_STATS.export_schema_stats('SCHEMA1','ST_TABLE',NULL,'SCHEMA1');
  3. Transfer the stats table ST_TABLE to the destination database using the method of your choice: exp/imp, CREATE TABLE ... AS SELECT over a database link, the SQL*Plus COPY command, etc. (see the sketch after this list).
  4. Import the stats into the schema.
    EXEC DBMS_STATS.import_schema_stats('SCHEMA1','ST_TABLE',NULL,'SCHEMA1');
  5. If the schema names are different, for example if you need to import SCHEMA1's stats into SCHEMA2, first update column C5 of the stats table to change the owner name from SCHEMA1 to SCHEMA2:
    UPDATE ST_TABLE SET C5='SCHEMA2' WHERE C5='SCHEMA1';
    then do the import with 'SCHEMA2' as the ownname.
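
For step 3, one way to move the stats table is CREATE TABLE ... AS SELECT over a database link, run from the destination database. A minimal sketch; the link name SRC_LINK is made up here, substitute whatever link points back at the source database:

-- on the destination database: pull the stats table across a db link
CREATE TABLE st_table AS SELECT * FROM schema1.st_table@src_link;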

A note on the statown parameter of the import procedures: it tells DBMS_STATS which schema owns the stats table (when that is not the schema being imported); it does not rename the owner recorded in the stats data. So the C5 update above is still needed when the schema names differ, but statown lets you run the import against a stats table that stays in SCHEMA1:

EXEC DBMS_STATS.import_schema_stats(ownname=>'SCHEMA2', stattab=>'ST_TABLE', statown=>'SCHEMA1');
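
To sanity-check the result, you can compare the imported stats against what you expect, for example with the standard DBA_TAB_STATISTICS view:

-- confirm the imported stats landed on SCHEMA2's tables
SELECT table_name, num_rows, last_analyzed
  FROM dba_tab_statistics
 WHERE owner = 'SCHEMA2'
 ORDER BY table_name;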

Tuesday, January 06, 2009

The importance of proper BACKUP!

I came across a techcrunch.com article today. A blog site called journalspace.com was completely wiped out after a few years of operation simply because an ex-IT person deliberately overwrote all the data on their SQL server. And guess what: they were only using RAID mirrored drives as 'backup'.

http://www.techcrunch.com/2009/01/03/journalspace-drama-all-data-lost-without-backup-company-deadpooled/

This might be one of the more extreme cases, but it's certainly a wake-up call for the many companies that have given up traditional tape backup and rely solely on standby and replication technologies.

In the case of Oracle databases, I know a famous financial website that doesn't back up its RAC production servers. They have set up multiple Data Guard standbys and replicate the data to remote, off-site servers. They have even set up a two-day delayed log apply to counter bad data contamination. But is that enough? It does seem to cover all the potential hardware and system failures pretty well; in most events they can bring the production servers back relatively quickly, without a painfully slow tape restore. Cool, huh?

But they overlooked one of the most common, and often deadliest, forms of system failure: human error, whether accidental or malicious.
What if a developer accidentally introduces an application bug that updates some records, and nobody notices until two days later? Of course, you could say let's increase the apply delay to 7 days. Hmm, what if you don't find the bug until 8 days later? You can't increase the delay indefinitely. Besides, this particular website has millions of users doing thousands of online transactions every second; it's not hard to imagine the cost of keeping that many days of transaction logs around.
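
For reference, this kind of delayed apply in Data Guard is normally just the DELAY attribute on the archive destination that ships redo to the standby. A minimal sketch, assuming a standby reached through a made-up service name STBY; DELAY is in minutes, so two days is 2880:

-- on the primary: keep shipping redo to the standby, but tell the
-- standby's managed recovery to wait 2 days (2880 minutes) before
-- applying it (the delay is ignored if recovery is started with
-- NODELAY or real-time apply)
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby DELAY=2880' SCOPE=BOTH;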

To this day, tape backup remains the most cost-effective method for massive, long-term backups. A lot of modern technologies (Flashback Database, Data Guard, RAC, replication, storage snapshots, etc.) have been introduced in the last few years to ease the DBA's burden of database recovery. But so far they only cover database failures on the scale of days; they will not completely replace tape backup anytime soon.
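
Flashback Database is a good example of that days-at-best window: its retention target is set in minutes, and actual coverage depends on how much flashback log the flash recovery area can hold. A sketch, with three days (4320 minutes) chosen purely for illustration:

-- ask the database to retain roughly 3 days of flashback logs;
-- this is a target, not a guarantee, since actual coverage depends
-- on the space available in the flash recovery area
ALTER SYSTEM SET db_flashback_retention_target = 4320 SCOPE=BOTH;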