Monday, March 31, 2008
When you use Data Pump import (impdp) for a database migration, you will sometimes get ORA-39082 errors, for example:
ORA-39082: Object type ALTER_PROCEDURE:"DX"."UPDATE_MKT" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."UPDATE_SALES" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."CHECK_CURRENT" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."DELETE_PAST" created with compilation warnings
After the import, if you issue
alter procedure UPDATE_MKT compile;
alter procedure DELETE_PAST compile;
etc., the procedures compile successfully without any errors or warnings.
The reason is that Data Pump import creates procedures before views, so if a procedure depends on a view you will get these ORA-39082 compilation warnings during the import.
The problem is easily fixed by recompiling all the affected procedures after the import, but it is a small inconvenience for a DBA who has to do database refreshes regularly. Or perhaps this is Oracle's way of telling us not to reference views in procedures :)
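A quicker way than compiling each procedure by hand, as a minimal sketch (assuming you can connect as a DBA; the schema DX is the one from the example above), is to let UTL_RECOMP recompile everything that is invalid in the imported schema and then confirm nothing is left over:
-- recompile all invalid objects in the imported schema (10g and later)
EXECUTE UTL_RECOMP.RECOMP_SERIAL('DX');
-- confirm nothing is left invalid
SELECT object_name, object_type, status
FROM   dba_objects
WHERE  owner = 'DX' AND status = 'INVALID';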
There is also a known bug that returns a similar error:
Impdp Returns ORA-39082 When Importing Wrapped Procedures (Doc ID 460267.1)
Tuesday, March 04, 2008
update gone wild
Our company is currently implementing a Supply Chain application to replace our outdated I2 system. Needless to say, I am the person who gets to set up and maintain the Oracle database for the project. The Oracle version is 10.2.0.3. The consultants kept running into performance problems with one of the procedures they built. I spent some time looking into it and found out how easy it is to screw up an Oracle database.
The first problem was a runaway session: the procedure had some sort of endless loop and was doing massive updates without committing, and at one point the undo tablespace grew all the way up to 11 GB. After the session was killed, Oracle needed to recover all of those 1.4 million undo blocks. As you would expect, that is not a fast process, and you literally cannot skip it in any way (unless you drop and recreate the database, of course). My instance was recovering at roughly 150 blocks per second. You can check the recovery progress by querying the view v$fast_start_transactions. It felt kind of slow to me; I am not sure whether others have had similar experiences to share.
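For anyone who wants to watch the rollback progress, this is roughly the query I used against v$fast_start_transactions (a sketch; the column names are as of 10gR2):
SELECT usn, state,
       undoblockstotal                  "Total",
       undoblocksdone                   "Done",
       undoblockstotal - undoblocksdone "ToDo"
FROM   v$fast_start_transactions;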
The interesting thing is that while SMON was doing the recovery, the consultant kept trying to access the database from the application. Whenever the application touched the affected tables, SMON fired up a whole bunch of parallel processes in an effort to speed up the rollback; this is called parallel rollback. In this case, however, it did not help the situation at all. On the contrary, with so many processes trying to recover the same transaction, they created quite a bit of contention on the undo blocks. Each of these processes, plus SMON, spent a lot of time waiting for the others, and buffer busy waits on undo blocks skyrocketed. Parallel rollback mostly improves performance for parallel transaction recovery; in this case it was better to disable it:
alter system set fast_start_parallel_rollback = false;
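To confirm whether parallel rollback is actually running, and how many recovery slaves are busy, a quick check like this helps (a sketch):
-- recovery slave processes and their states (IDLE vs. RECOVERING)
SELECT state, COUNT(*)
FROM   v$fast_start_servers
GROUP  BY state;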
The second problem was an update statement that kept hanging the session and doing all sorts of crazy things in the database. The simple update generated millions of buffer gets along with huge numbers of db file scattered read and latch: cache buffers chains waits. With the help of a couple of queries,
select CHILD# "cCHILD",
       ADDR   "sADDR",
       GETS   "sGETS",
       MISSES "sMISSES",
       SLEEPS "sSLEEPS"
from   v$latch_children
where  name = 'cache buffers chains'
order  by 5, 1, 2, 3 desc;

select /*+ RULE */
       e.owner || '.' || e.segment_name segment_name,
       e.extent_id extent#,
       x.dbablk - e.block_id + 1 block#,
       x.tch,
       l.child#
from   sys.v$latch_children l,
       sys.x$bh x,
       sys.dba_extents e
where  x.hladdr = '&ADDR'    -- plug in the latch ADDR found by the first query
and    e.file_id = x.file#
and    x.hladdr = l.addr
and    x.dbablk between e.block_id and e.block_id + e.blocks - 1
order  by x.tch desc;
The culprit object was quickly discovered, but why? After a quick analyze on the table, the problem went away.
I suspect this is some sort of Oracle bug. The table causing the problem has only about 20k rows. It was truncated and repopulated earlier in the procedure; however, after the truncate Oracle reset the statistics on the table and did not update them after the insert. When the update kicked in, the optimizer was working with the wrong information and went into some kind of endless loop. For now I have suggested that the consultant analyze the table after each insert as a workaround, but I would like to find a more organic and permanent solution from Oracle.
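The workaround boils down to gathering statistics right after the bulk insert. A minimal sketch (the table name here is hypothetical):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'STAGING_TABLE',   -- hypothetical table name
    cascade => TRUE);             -- include the indexes as well
END;
/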
References:
How To Identify a Hot Block Within The Database Buffer Cache (Doc ID 163424.1)
LATCH: CACHE BUFFERS CHAINS (Doc ID 42152.1)
Parallel Rollback may hang database, Parallel query servers get 100% cpu (Doc ID 144332.1)
Tuesday, February 26, 2008
Bypass buffer cache for Full Table Scans
PayPal DBA Saibabu mentioned an interesting undocumented parameter in his blog, http://sai-oracle.blogspot.com/, that bypasses the buffer cache for serial full table scans:
alter session set "_serial_direct_read" = true;
I found it particularly useful in an OLTP environment where you occasionally need to run a full table scan query against a large table.
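A typical usage pattern looks like this (just a sketch; the table name is made up, and since the parameter is undocumented you should test it before relying on it):
alter session set "_serial_direct_read" = true;
-- the occasional full table scan against a large table
select /*+ full(t) */ count(*) from big_history_table t;
alter session set "_serial_direct_read" = false;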
Thursday, February 21, 2008
Install Oracle 10gR2 on RHEL5 or OEL5
It has come to my attention that a lot of people are still having problems installing Oracle 10gR2 on Red Hat Enterprise Linux / Oracle Enterprise Linux 5.
Oracle has already posted a series of Metalink notes covering every aspect of the issue. By following the procedures in these notes you should get a successful installation.
The notes link to each other, so once you have found one you should have no problem finding the rest.
Requirements For Installing Oracle10gR2 On RHEL/OEL 5 (x86_64) (Doc ID 421308.1)
Defining a "default RPMs" installation of the RHEL OS (Doc ID 376183.1)
Requirements For Installing Oracle 10gR2 On RHEL5 (x86) (Doc ID 419646.1)
Installer Is Failing on Prereqs for Redhat-5 - RHEL5 (Doc ID 456634.1)
Monday, December 17, 2007
11g surprises
The first two surprises after my first 11g installation were the new location of the alert.log file and the fact that AMM has changed yet again. I did not take part in the 11g beta program, so these two surprises may be old news for many others.
Just in case you are wondering where your alert.log files went and what the memory_target parameter is, check the following two Metalink docs for the full explanation.
Automatic Memory Management (AMM) on 11g (Doc ID 443746.1)
Finding alert.log file in 11g (Doc ID 438148.1)
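For the record, the two quick checks boil down to this (a sketch; v$diag_info is the new 11g view that exposes the diagnostic locations, including where the alert log now lives):
SQL> show parameter memory_target
SQL> SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Diag Alert');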
Sunday, November 04, 2007
explain plan - a note for myself
Just a quick note to myself. I used to generate explain plans from SQL*Plus using autotrace; it is time to switch to the dbms_xplan package instead. Well, it is never too late to do the right thing.
SQL> explain plan for select empno,sal from emp;
Explained.
SQL> select * from table(dbms_xplan.display);
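A related variant worth remembering, as a sketch (assuming 10g or later), pulls the actual runtime plan and row counts of the statement you just executed:
SQL> select /*+ gather_plan_statistics */ empno, sal from emp;
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));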
Thursday, November 01, 2007
LOG Miner by Example
Came across a post in the OTN forum today where DBMS Direct gave a pretty good demonstration of how to use LogMiner. I thought it might be a good idea to post it here as a quick reference for myself or anyone else who needs it.
First, check whether supplemental logging is enabled:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
If not,
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Next, choose the dictionary option; there are three to choose from.
Tell LogMiner to use the current online catalog as the dictionary:
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Or extract the dictionary to the redo log files; use this option if you cannot access the source database while you do the log mining:
EXECUTE DBMS_LOGMNR_D.BUILD( -
OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
Or extract the dictionary to a flat file; this option exists for backward compatibility and is not recommended if either of the other two is available:
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
'/oracle/database/', -
DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
Now find a list of the recent redo logfiles you want to mine. You can let LogMiner use the control file to build the list of required redo logfiles automatically:
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTTIME => '01-Jan-2007 08:30:00', -
ENDTIME => '01-Jan-2007 08:45:00', -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.CONTINUOUS_MINE);
Or manually pick the ones you want to analyze:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE COMPLETION_TIME > TRUNC(SYSDATE);
/oracle/edb/oraarch/edb/1_3703_608486264.dbf
/oracle/edb/oraarch/edb/1_3704_608486264.dbf
/oracle/edb/oraarch/edb/1_3705_608486264.dbf
/oracle/edb/oraarch/edb/1_3706_608486264.dbf
/oracle/edb/oraarch/edb/1_3707_608486264.dbf
/oracle/edb/oraarch/edb/1_3708_608486264.dbf
/oracle/edb/oraarch/edb/1_3710_608486264.dbf
/oracle/edb/oraarch/edb/1_3709_608486264.dbf
/oracle/edb/oraarch/edb/1_3711_608486264.dbf
/oracle/edb/oraarch/edb/1_3712_608486264.dbf
10 rows selected.
Add the logfiles for analysis.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/edb/oraarch/edb/1_3712_608486264.dbf',OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/edb/oraarch/edb/1_3711_608486264.dbf');
Start LogMiner to populate the V$LOGMNR_CONTENTS view (include the dictionary option here again if you are using the online catalog):
EXECUTE DBMS_LOGMNR.START_LOGMNR();
Format the SQL*Plus output for better viewing:
SQL> set lines 200
SQL> set pages 0
SQL> col USERNAME format a10
SQL> col SQL_REDO format a30
SQL> col SQL_UNDO format a30
And there we go,
SQL> SELECT username, SQL_REDO, SQL_UNDO,
            to_char(timestamp,'DD/MM/YYYY HH24:MI:SS') timestamp
     FROM V$LOGMNR_CONTENTS
     WHERE username = 'TEST' AND rownum < 5;
USERNAME SQL_REDO SQL_UNDO
---------- ------------------------------ ------------------------------
TIMESTAMP
-------------------
TEST insert into "TEST"."KI_TEST delete from "TEST"."KI_TEST
_RES7"("MASTER_ID","WAFER_ID", _RES7" where "MASTER_ID" = '18
"DIE_NUM","TEST_NUM","RESULT") 8199' and "WAFER_ID" = '15' an
values ('188199','15','0,144' d "DIE_NUM" = '0,144' and "TES
,'4','.83555'); T_NUM" = '4' and "RESULT" = '.
83555' and ROWID = 'AAAOnaAAZA
AAECZAEV';
01/11/2007 09:23:59
TEST insert into "TEST"."KI_TEST delete from "TEST"."KI_TEST
_RES7"("MASTER_ID","WAFER_ID", _RES7" where "MASTER_ID" = '18
"DIE_NUM","TEST_NUM","RESULT") 8199' and "WAFER_ID" = '15' an
values ('188199','15','0,144' d "DIE_NUM" = '0,144' and "TES
,'5','.70222'); T_NUM" = '5' and "RESULT" = '.
70222' and ROWID = 'AAAOnaAAZA
AAECZAEW';
01/11/2007 09:23:59
TEST insert into "TEST"."KI_TEST delete from "TEST"."KI_TEST
_RES7"("MASTER_ID","WAFER_ID", _RES7" where "MASTER_ID" = '18
"DIE_NUM","TEST_NUM","RESULT") 8199' and "WAFER_ID" = '15' an
values ('188199','15','0,144' d "DIE_NUM" = '0,144' and "TES
,'6','.82502'); T_NUM" = '6' and "RESULT" = '.
82502' and ROWID = 'AAAOnaAAZA
AAECZAEX';
01/11/2007 09:23:59
TEST insert into "TEST"."KI_TEST delete from "TEST"."KI_TEST
_RES7"("MASTER_ID","WAFER_ID", _RES7" where "MASTER_ID" = '18
"DIE_NUM","TEST_NUM","RESULT") 8199' and "WAFER_ID" = '15' an
values ('188199','15','0,144' d "DIE_NUM" = '0,144' and "TES
,'7','1.4818'); T_NUM" = '7' and "RESULT" = '1
.4818' and ROWID = 'AAAOnaAAZA
AAECZAEY';
01/11/2007 09:23:59
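One step the demo leaves out: when you are finished, end the LogMiner session so it releases the resources it holds.
EXECUTE DBMS_LOGMNR.END_LOGMNR();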
Reference:
Oracle documentation: Using LogMiner to Analyze Redo Log Files
Original post on OTN