Wednesday, September 02, 2009

Stop Datapump Job

If you have a Data Pump job running and you want to stop it, there are a couple of ways to do so.

First run a statement to find out the running jobs,


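A minimal query for that, using the standard DBA_DATAPUMP_JOBS dictionary view, could look like this:

select owner_name, job_name, operation, state
from dba_datapump_jobs;

The OWNER_NAME and JOB_NAME values it returns are what you feed to ATTACH.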

then you can either use DBMS_DATAPUMP.ATTACH and DBMS_DATAPUMP.STOP_JOB to stop them from PL/SQL, or attach to the job interactively from the command-line client:


impdp userid=system/password attach=SYS_IMPORT_FULL_03
Import> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes
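The PL/SQL route can be sketched like this (the job name and owner below are placeholders for whatever the dictionary views report):

DECLARE
  h NUMBER;
BEGIN
  -- attach to the running job by name and owner
  h := DBMS_DATAPUMP.ATTACH(job_name => 'SYS_IMPORT_FULL_03', job_owner => 'SYSTEM');
  -- stop it; immediate => 1 aborts the worker processes right away
  DBMS_DATAPUMP.STOP_JOB(handle => h, immediate => 1, keep_master => 0);
END;
/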

Monday, August 31, 2009

Hint, Check patch history

One of the methods to check DB patch history is to query registry$history.


(the query output showed two CPU patches in the history, applied 12-SEP-08 and 30-MAY-09)
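A query along these lines pulls the relevant fields (column names per the 10g registry$history definition):

select action_time, action, version, comments
from registry$history
order by action_time;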

Wednesday, August 26, 2009

OEM Grid Control Agent issue

This morning we kept getting OEM agent messages from one of our production DB servers (HP-UX).

It sent this agent-unreachable alert and clear alert repeatedly.

Severity=Unreachable StartMessage=Agent is Unreachable (REASON = SSL handshake failed: SSLSessionNotFoundErr) but the host is UP.

Severity=Unreachable ClearMessage=Agent Unreachability is cleared. The current status of the target is UP.

When checking on the host, we observed a number of perl processes taking high CPU and memory that had been running for a couple of hours.

11466 /oracle/xxx/agent10g/perl/bin/perl /oracle/xxx/agent10g/sysman/admin/scripts/ 29011
14180 /oracle/xxx/agent10g/perl/bin/perl /oracle/xxx/agent10g/sysman/admin/scripts/ 32100

These are symptoms of bug 5908032, described in the MetaLink doc,
Doc ID:

The immediate solution is to stop/start the agent, or to kill these processes if the agent can't be stopped gracefully.
The long-term solution is to apply Patch 5908032.

An update on this: the other DBA told me the agent on this server was not patched after the DB was upgraded, so the agent is still affected in this case.
Also, stopping the agent did not remove the hung processes; we had to kill them manually.

Tuesday, February 03, 2009

A gotcha of Data Pump export

Today, while answering an OTN forum question about the consistency of Data Pump export, I found I had been under the false impression that Data Pump by default guarantees dump file data consistency among all the tables. That was because the Oracle Data Pump documentation stated, under the section named "How Data Pump Export Parameters Map to Those of the Original Export Utility",
"A parameter comparable to CONSISTENT is not needed"

This statement is kinda misleading; it's easy to come away with the false impression that Data Pump will guarantee consistency, so that CONSISTENT is not needed.
Oracle has since revised the document to
"A parameter comparable to CONSISTENT is not needed. Use FLASHBACK_SCN and FLASHBACK_TIME for this functionality."
which is a little better.

To get the current SCN, use:
select dbms_flashback.get_system_change_number from dual;
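To actually get a dump that is consistent across tables, feed that SCN back to expdp yourself (the schema, directory and file names here are placeholders):

expdp system/password schemas=USR001 directory=flash dumpfile=usr001_cons.dmp logfile=exp_usr001_cons.log flashback_scn=<scn from the query above>

With FLASHBACK_SCN set, every table is exported as of that single SCN, which is the Data Pump equivalent of the old exp CONSISTENT=Y.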

To make things worse, some expdp versions embed this header in their output messages, which is even more misleading.

Export: Release - 64bit Production on Friday, 05 September, 2008 13:59:59
Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options

FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_02": system/******** directory=flash dumpfile=usr001_1.dmp logfile=exp_usr001_1.log

So now you know: Oracle Data Pump, up to this version, doesn't guarantee data consistency among the tables in a dump file unless you ask for it. It only guarantees point-in-time consistency of each table being exported.

Also see this reference:

Expdp Message "FLASHBACK automatically enabled" Does Not Guarantee Export Consistency
Doc ID:

Thursday, January 22, 2009

Quick Optimizer STATISTICS transfer between schemas

Oracle DBMS_STATS provides a quick way to transfer optimizer statistics.
The basic steps are:

  1. create stats table to hold exported stats data.
    EXEC DBMS_STATS.create_stat_table('SCHEMA1','ST_TABLE');
  2. export stats to stats table
    EXEC DBMS_STATS.export_schema_stats('SCHEMA1','ST_TABLE',NULL,'SCHEMA1');
  3. transfer the stats table ST_TABLE to the destination location using the method of your choice, like exp/imp, create table as select ..., SQL*Plus COPY, etc.
  4. import the stats into schema
    EXEC DBMS_STATS.import_schema_stats('SCHEMA1','ST_TABLE',NULL,'SCHEMA1');
  5. If the schema name is different (for example, you need to import SCHEMA1's stats into SCHEMA2), then before the import update the stats table column C5 to change the owner name from SCHEMA1 to SCHEMA2:
    update ST_TABLE set C5='SCHEMA2';
    then do the import.
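Putting the cross-schema case together, the whole transfer can be sketched end to end (schema and table names as in the steps above; this assumes ST_TABLE ends up in SCHEMA1 at the destination):

-- source side: create the stats table and export into it
EXEC DBMS_STATS.create_stat_table('SCHEMA1','ST_TABLE');
EXEC DBMS_STATS.export_schema_stats('SCHEMA1','ST_TABLE',NULL,'SCHEMA1');
-- move ST_TABLE to the destination (exp/imp, CTAS over a db link, etc.)
-- destination side: remap the owner recorded in the stats rows, then import
UPDATE schema1.st_table SET c5 = 'SCHEMA2' WHERE c5 = 'SCHEMA1';
COMMIT;
EXEC DBMS_STATS.import_schema_stats('SCHEMA2','ST_TABLE',NULL,'SCHEMA1');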

Note that DBMS_STATS also accepts a statown parameter, but it identifies the schema that owns the stats table itself (handy when ST_TABLE doesn't live in the schema you're connected as); the C5 update above is still what remaps the stats to the new owner.

EXEC DBMS_STATS.import_schema_stats('SCHEMA2','ST_TABLE',NULL,statown=>'SCHEMA1');

Tuesday, January 06, 2009

The importance of proper BACKUP!

Came across an article today: a blog site was completely wiped out after a few years of operation, simply because an ex-IT person deliberately overwrote all the data on the SQL Server. And guess what, they were only using RAID mirrored drives as 'backup'.

This might be one of the extreme cases, but it's certainly a wake-up call to the many companies that gave up traditional tape backup and rely solely on standby and replication technologies.

In the case of Oracle databases, I know a famous financial website that didn't back up their RAC production servers. They had set up multiple Data Guard standby servers and replicated data remotely to off-site servers. They even set up a two-day delayed log apply mechanism to counter bad-data contamination. But is that enough? Well, it seems to cover all potential hardware and system failures pretty well. In most events they can bring production servers back relatively quickly, without a painfully slow tape restore. Cool, huh?
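For reference, that kind of delayed apply is driven by the DELAY attribute (in minutes) on the primary's archive destination; a sketch, with a placeholder service name and the two days mentioned above:

ALTER SYSTEM SET log_archive_dest_2='SERVICE=standby_db DELAY=2880' SCOPE=BOTH;

The standby then waits 2880 minutes before applying each archived log, unless managed recovery is started with NODELAY.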

But they overlooked one of the most common, and often most deadly, forms of system failure: human error, whether accidental or malicious.
What if a developer accidentally introduced an application bug into the system that updated some records, and it wasn't noticed until two days later? Of course you can say, let's increase the delayed log apply to 7 days. Hmm, what if you don't find the bug until 8 days later? You can't increase the apply delay indefinitely. Besides, this particular website has millions of users doing thousands of online transactions every second; it's not hard to imagine the cost of keeping all the transaction logs for many days.

To this day, tape backup is still the most cost-effective method for massive long-term backup. A lot of modern technologies, like Flashback Database, Data Guard, RAC, replication and storage snapshots, have been introduced in the last few years to help ease the DBA's burden of database recovery. But so far they only cover database failures on the order of days; they will not completely replace tape backup anytime soon.