Monday, March 31, 2008

ORA-39082 error while import using data pump

When you use Data Pump import (impdp) for a database migration, you will sometimes get ORA-39082 errors, for example:

ORA-39082: Object type ALTER_PROCEDURE:"DX"."UPDATE_MKT"
created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."UPDATE_SALES"
created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."CHECK_CURRENT"
created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."DELETE_PAST"
created with compilation warnings

After the import, when you issue

alter procedure UPDATE_MKT compile;
alter procedure DELETE_PAST compile;
etc.

they compile successfully without any errors or warnings.

The reason is that Data Pump import creates procedures before views; if your procedures depend on views, you will get ORA-39082 compilation warnings during the import.

Although the problem can easily be fixed by recompiling all the procedures after the import, this is a little inconvenient for DBAs who need to do DB refreshes regularly.
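One convenient way to recompile everything in one shot is Oracle's UTL_RECOMP package (shipped with 10g); a sketch, where the schema name DX is just a placeholder for whatever schema you imported:

```sql
-- Recompile all invalid objects in the imported schema (run as SYSDBA).
-- Replace DX with the schema you imported.
EXEC UTL_RECOMP.RECOMP_SERIAL('DX');

-- Verify nothing is left invalid.
SELECT object_name, object_type
  FROM dba_objects
 WHERE owner = 'DX'
   AND status = 'INVALID';
```

RECOMP_SERIAL recompiles objects one at a time; UTL_RECOMP.RECOMP_PARALLEL does the same job with multiple workers if the schema is large.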

Or perhaps this is Oracle's way of suggesting that we not reference views in procedures :)

There are also bugs that return similar errors:

Impdp Returns ORA-39082 When Importing Wrapped Procedures

Doc ID:
Note:460267.1

Tuesday, March 04, 2008

update gone wild

Currently our company is implementing a supply chain application to replace our outdated I2 system. Needless to say, I am the person who gets to set up and maintain the Oracle database for the project. The Oracle version is 10.2.0.3. The consultants kept having performance problems with one of the procedures they built. I spent some time looking into it, and found out how easy it is to screw up an Oracle database.

The first problem was a runaway session: the procedure had some sort of dead loop and was doing massive updates without committing, and at one point the undo tablespace grew all the way up to 11G. After the session was killed, Oracle needed to recover all 1.4 million of those undo blocks. As you would expect, this is not a fast process and you literally can't skip it in any way (unless you drop and recreate the database, of course). My instance was recovering at a rate of ~150 blocks/sec. You can check the recovery progress by querying the view v$fast_start_transactions. It felt kind of slow; I'm not sure if other people who have had a similar experience can share.
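A sketch of the progress check (column names as documented for the 10g view; UNDOBLOCKSTOTAL/UNDOBLOCKSDONE give the remaining work):

```sql
-- Monitor SMON transaction recovery progress.
SELECT usn, state,
       undoblocksdone,
       undoblockstotal,
       undoblockstotal - undoblocksdone AS blocks_remaining
  FROM v$fast_start_transactions;
```

Running it a couple of minutes apart and dividing the change in undoblocksdone by the elapsed seconds gives the recovery rate.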

The interesting thing is that while SMON was doing the recovery, the consultant was trying to access the database from the application. Whenever the application touched the affected tables, SMON fired up a whole bunch of parallel processes in an effort to speed up the rollback; this is called parallel rollback. However, in this case it didn't help the situation at all. On the contrary, with so many processes trying to recover the same transaction, they created quite a bit of contention on the undo blocks: each of these processes, plus SMON, spent a lot of time waiting for the others, and buffer busy waits events on undo blocks skyrocketed. Parallel rollback mostly improves performance for recovering parallel transactions. In a case like this, it's better to disable parallel rollback with

alter system set fast_start_parallel_rollback = false;

The second problem was an update statement that kept hanging the session and doing all sorts of crazy things in the database. The simple update generated millions of buffer gets, along with huge numbers of db file scattered read and latch: cache buffers chains events. With the help of some queries,

select child# "cCHILD",
       addr   "sADDR",
       gets   "sGETS",
       misses "sMISSES",
       sleeps "sSLEEPS"
  from v$latch_children
 where name = 'cache buffers chains'
 order by 5 desc, 1, 2, 3;

select /*+ RULE */
       e.owner || '.' || e.segment_name segment_name,
       e.extent_id extent#,
       x.dbablk - e.block_id + 1 block#,
       x.tch,
       l.child#
  from sys.v$latch_children l,
       sys.x$bh x,
       sys.dba_extents e
 where x.hladdr = '&ADDR' and
       e.file_id = x.file# and
       x.hladdr = l.addr and
       x.dbablk between e.block_id and e.block_id + e.blocks - 1
 order by x.tch desc;

(Substitute the address of the hot child latch found by the first query for &ADDR.)

The culprit object was quickly discovered, but why?
After a quick analyze on the table, the problem went away.
I suspect this is some sort of Oracle bug. The table causing the problem only has 20k rows. It was truncated and reloaded earlier in the procedure; however, after the truncate, Oracle reset the table's statistics and didn't update them after the insert. When the update kicked in, the optimizer was using the wrong information and went into some dead-loop situation. For now I have suggested the consultant do an analyze after each insert as a workaround, but I would like to find a more organic and permanent solution for this from Oracle.
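The workaround looks roughly like this (the schema and table names below are placeholders, not the real application objects; DBMS_STATS is the supported replacement for ANALYZE):

```sql
-- Hypothetical step added right after the truncate-and-reload,
-- so the optimizer doesn't run with the post-truncate (empty) statistics.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'DX',         -- placeholder schema
    tabname => 'SOME_TABLE', -- placeholder table
    cascade => TRUE);        -- gather index statistics too
END;
/
```

On 10g, locking the statistics with DBMS_STATS.LOCK_TABLE_STATS before the truncate would be another way to keep them from being reset, at the cost of them going stale.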

References:

How To Identify a Hot Block Within The Database Buffer Cache.
Doc ID:
Note:163424.1


LATCH: CACHE BUFFERS CHAINS
Doc ID:
Note:42152.1


Parallel Rollback may hang database, Parallel query servers get 100% cpu
Doc ID:
Note:144332.1