The first two surprises after my first 11g installation were the new location of the alert.log file and the fact that AMM has changed yet again. I didn't run the 11g beta program, so these two surprises may be old news to many others.
Just in case you're wondering where your alert.log files went and what the memory_target parameter is, check the following two Metalink docs for a full explanation.
Automatic Memory Management (AMM) on 11g
Doc ID: Note:443746.1
Finding the alert.log file in 11g
Doc ID: Note:438148.1
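If you just want to poke at it yourself, here's a quick sketch (my own addition, not from the notes): v$diag_info is the 11g view that exposes the ADR paths, and memory_target is the new umbrella parameter.
SQL> SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Diag Alert');
SQL> show parameter memory_target
'Diag Trace' points at the directory holding the text alert.log; 'Diag Alert' holds the XML version.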
Monday, December 17, 2007
Sunday, November 04, 2007
explain plan - a note for myself
Just a quick note to myself. I used to generate explain plans from SQL*Plus using autotrace; it's time to switch to the dbms_xplan package instead. Well, it's never too late to do the right thing anyway.
SQL> explain plan for select empno,sal from emp;
Explained.
SQL> select * from table(dbms_xplan.display);
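A related trick (my own addition, not from the original note): if the statement has actually been executed, dbms_xplan.display_cursor can pull the real plan from the cursor cache instead of the estimated one. Something like:
SQL> select empno,sal from emp;
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));
Passing NULL for the sql_id and child number shows the plan of the last statement executed in the session (you need SELECT access on the underlying V$ views for this to work).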
Thursday, November 01, 2007
LOG Miner by Example
Came across a post in the OTN forum today where DBMS Direct gave a pretty good demonstration of how to use LogMiner. Thought it might be a good idea to post it here as a quick future reference for myself or anyone who needs it.
First, check whether you have supplemental logging enabled:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
If not,
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Next, choose the dictionary option; there are three options to pick from.
Tell LogMiner to use the current online catalog as the dictionary:
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Or extract the dictionary to the redo log files; use this option if you can't access the source database while you do the log mining:
EXECUTE DBMS_LOGMNR_D.BUILD( -
OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
Or extract the dictionary to a flat file; this option is for backward compatibility and is not recommended if you can use one of the other two:
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
'/oracle/database/', -
DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
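One thing to keep in mind for the flat-file route (my own note, not from the original post): the directory you pass to DBMS_LOGMNR_D.BUILD has to be listed in UTL_FILE_DIR, which is a static parameter, so something along these lines plus an instance restart is needed (set it in the pfile instead if you don't use an spfile):
SQL> ALTER SYSTEM SET UTL_FILE_DIR = '/oracle/database/' SCOPE = SPFILE;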
Now find the list of recent redo logfiles you want to mine.
You can let LogMiner use the control file to automatically build the list of redo logfiles needed:
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTTIME => '01-Jan-2007 08:30:00', -
ENDTIME => '01-Jan-2007 08:45:00', -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.CONTINUOUS_MINE);
Or manually pick the ones you want to analyze:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE COMPLETION_TIME > TRUNC(SYSDATE);
/oracle/edb/oraarch/edb/1_3703_608486264.dbf
/oracle/edb/oraarch/edb/1_3704_608486264.dbf
/oracle/edb/oraarch/edb/1_3705_608486264.dbf
/oracle/edb/oraarch/edb/1_3706_608486264.dbf
/oracle/edb/oraarch/edb/1_3707_608486264.dbf
/oracle/edb/oraarch/edb/1_3708_608486264.dbf
/oracle/edb/oraarch/edb/1_3710_608486264.dbf
/oracle/edb/oraarch/edb/1_3709_608486264.dbf
/oracle/edb/oraarch/edb/1_3711_608486264.dbf
/oracle/edb/oraarch/edb/1_3712_608486264.dbf
10 rows selected.
Add the logfiles for analysis.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/edb/oraarch/edb/1_3712_608486264.dbf',OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/edb/oraarch/edb/1_3711_608486264.dbf');
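A quick sanity check I like at this point (my addition, assuming you added the files manually): V$LOGMNR_LOGS lists the logfiles currently registered with the LogMiner session.
SQL> SELECT log_id, filename FROM v$logmnr_logs;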
Start LogMiner to populate the V$LOGMNR_CONTENTS view
EXECUTE DBMS_LOGMNR.START_LOGMNR();
Format the SQL*Plus output for better viewing:
SQL> set lines 200
SQL> set pages 0
SQL> col USERNAME format a10
SQL> col SQL_REDO format a30
SQL> col SQL_UNDO format a30
And there we go,
1  SELECT username,
2         SQL_REDO, SQL_UNDO, to_char(timestamp,'DD/MM/YYYY HH24:MI:SS') timestamp
3* FROM V$LOGMNR_CONTENTS where rownum < 5 and username = 'TEST'
SQL> /
USERNAME SQL_REDO SQL_UNDO
---------- ------------------------------ ------------------------------
TIMESTAMP
-------------------
TEST insert into "TEST"."KI_TEST delete from "TEST"."KI_TEST
_RES7"("MASTER_ID","WAFER_ID", _RES7" where "MASTER_ID" = '18
"DIE_NUM","TEST_NUM","RESULT") 8199' and "WAFER_ID" = '15' an
values ('188199','15','0,144' d "DIE_NUM" = '0,144' and "TES
,'4','.83555'); T_NUM" = '4' and "RESULT" = '.
83555' and ROWID = 'AAAOnaAAZA
AAECZAEV';
01/11/2007 09:23:59
TEST insert into "TEST"."KI_TEST delete from "TEST"."KI_TEST
_RES7"("MASTER_ID","WAFER_ID", _RES7" where "MASTER_ID" = '18
"DIE_NUM","TEST_NUM","RESULT") 8199' and "WAFER_ID" = '15' an
values ('188199','15','0,144' d "DIE_NUM" = '0,144' and "TES
,'5','.70222'); T_NUM" = '5' and "RESULT" = '.
70222' and ROWID = 'AAAOnaAAZA
AAECZAEW';
01/11/2007 09:23:59
TEST insert into "TEST"."KI_TEST delete from "TEST"."KI_TEST
_RES7"("MASTER_ID","WAFER_ID", _RES7" where "MASTER_ID" = '18
"DIE_NUM","TEST_NUM","RESULT") 8199' and "WAFER_ID" = '15' an
values ('188199','15','0,144' d "DIE_NUM" = '0,144' and "TES
,'6','.82502'); T_NUM" = '6' and "RESULT" = '.
82502' and ROWID = 'AAAOnaAAZA
AAECZAEX';
01/11/2007 09:23:59
TEST insert into "TEST"."KI_TEST delete from "TEST"."KI_TEST
_RES7"("MASTER_ID","WAFER_ID", _RES7" where "MASTER_ID" = '18
"DIE_NUM","TEST_NUM","RESULT") 8199' and "WAFER_ID" = '15' an
values ('188199','15','0,144' d "DIE_NUM" = '0,144' and "TES
,'7','1.4818'); T_NUM" = '7' and "RESULT" = '1
.4818' and ROWID = 'AAAOnaAAZA
AAECZAEY';
01/11/2007 09:23:59
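Once you're done digging, end the LogMiner session so the memory it allocated is released (this step isn't in the original post, but it's the standard cleanup call):
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();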
Reference:
Oracle Document
Using LogMiner to Analyze Redo Log Files
Original Post in OTN
Tuesday, October 09, 2007
LOGGING/NOLOGGING
One common misconception is that if you set NOLOGGING on a table or index, then no future DML operations (insert, update, delete, etc.) on that object will be recorded in the logfiles.
Actually, NOLOGGING only means that operations which explicitly support it, when performed on an object in NOLOGGING mode, will not be fully recorded in the logfiles.
However, not all operations support NOLOGGING mode. The following is a list of operations that do (a small example follows the list):
direct load (SQL*Loader)
direct-load INSERT
CREATE TABLE ... AS SELECT
CREATE INDEX
ALTER TABLE ... MOVE PARTITION
ALTER TABLE ... SPLIT PARTITION
ALTER INDEX ... SPLIT PARTITION
ALTER INDEX ... REBUILD
ALTER INDEX ... REBUILD PARTITION
INSERT, UPDATE, and DELETE on LOBs in NOCACHE NOLOGGING mode stored out of line
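Here's a minimal sketch of what that looks like in practice (emp_copy is a hypothetical table; minimal redo also assumes the tablespace and database are not in FORCE LOGGING mode):
SQL> ALTER TABLE emp_copy NOLOGGING;
SQL> INSERT /*+ APPEND */ INTO emp_copy SELECT * FROM emp;
SQL> COMMIT;
A conventional INSERT into the same table would still be fully logged; only the direct-path (APPEND) load honours the NOLOGGING attribute.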
It is also recommended to take a backup of the affected object right after a NOLOGGING operation. If you later do media recovery of the object using a backup taken before the NOLOGGING operation, the extent invalidation records mark a range of blocks as logically corrupt, because the redo data was not fully logged. A similar situation applies to a Data Guard setup; that's also one of the reasons a Data Guard configuration requires you to enable force logging.
Howard has been talking about using the _disable_logging parameter to temporarily disable redo logging while bulk loading. However, this parameter should be used with extreme caution.
Howard's post about _disable_logging
Friday, September 21, 2007
Some thing about Shared Pool
Here's a good presentation explaining the internal structure of the shared pool and how locks, pins and latches are handled.
http://www.perfvision.com/papers/unit6_shared_pool.ppt
Friday, July 06, 2007
Set SQL*Plus Prompt
Since most DBAs use SQL*Plus more than any other tool, it's convenient to set the SQL*Plus prompt to show your current login user and SID instead of the plain "SQL>", especially if you have multiple instances running under the same Oracle installation.
To do this, you need to include following in your glogin.sql
set term off
define sql_prompt=idle
column user_sid new_value sql_prompt
select lower(user) || '@' || '&_CONNECT_IDENTIFIER' user_sid from dual;
set sqlprompt '&sql_prompt> '
set term on
When you log in, you will see this:
sqlplus scott/tiger
scott@dev>
If you are on 10g, it's much easier; all you need is this:
set sqlprompt '_user@&_connect_identifier>'
Monday, June 25, 2007
Getting a job as a Junior DBA - series
Unfortunately, due to some unspecified events that happened in early 2008, Howard decided to withdraw his contributions to the Oracle community. More info here
The interview series link is no longer valid. I wish I had cut/pasted it :(
One of the most common questions I see new DBAs ask in Oracle forums is what kind of questions an employer will ask during a job interview. I've tried to sum up a few questions I was asked during job interviews as a candidate.
Howard Rogers has a nice series in his blog from the interviewer's perspective.
Getting a job as a Junior DBA
Frankly, if you ask me, I wouldn't want to work for him :) But that doesn't change the fact that it's a nice read.
Heterogeneous Distributed Database Systems
How to maintain regular data replication between Oracle and other types of databases is a pretty frequently asked question in the DBA world.
Consider setting up a heterogeneous distributed database system to solve the problem.
John Palinski has a good post about it here
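For the simplest flavour of this (Generic Connectivity through an ODBC gateway), the Oracle side boils down to a TNS alias with HS=OK and a database link over it. A rough sketch, where all the names (mysqldb, gateway_host, remote_user, some_table) are placeholders and the listener/init<sid>.ora gateway setup is omitted:
mysqldb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gateway_host)(PORT = 1521))
    (CONNECT_DATA = (SID = mysqldb))
    (HS = OK)
  )
SQL> CREATE DATABASE LINK mysqldb CONNECT TO "remote_user" IDENTIFIED BY "password" USING 'mysqldb';
SQL> SELECT * FROM some_table@mysqldb;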
Saturday, June 23, 2007
Shared Pool Structures
A useful link to a white paper about the shared pool:
Understanding Shared Pool Memory Structures
Also, to troubleshoot any ORA-4031 error, Metalink has a good note:
Troubleshooting and Diagnosing ORA-4031 Error (previously FAQ: ORA-4031):
Note:396940.1
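Before diving into the note, a quick first check I usually run (my own habit, not from the Metalink note) is to see how much free memory is left in the shared pool:
SQL> SELECT pool, name, round(bytes/1024/1024) mb FROM v$sgastat WHERE pool = 'shared pool' AND name = 'free memory';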
Thursday, June 21, 2007
Find out Rollback Segment with active transactions
Use this script to find out how many active transactions are linked to each rollback segment.
SELECT NAME, XACTS "ACTIVE TRANSACTIONS"
FROM V$ROLLNAME, V$ROLLSTAT
WHERE V$ROLLNAME.USN = V$ROLLSTAT.USN;
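If you also want to see which sessions own those transactions, a variation along these lines works (a sketch joining through V$TRANSACTION and V$SESSION):
SELECT r.name, s.sid, s.serial#, s.username
FROM V$ROLLNAME r, V$TRANSACTION t, V$SESSION s
WHERE r.usn = t.xidusn
AND t.ses_addr = s.saddr;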
Secure your Password
You should run the script $ORACLE_HOME/rdbms/admin/utlpwdmg.sql
to secure your passwords via the DEFAULT profile after database creation, or expect to see a lot of Policy Violations in your EM dbconsole for the instance.
I don't know why Oracle didn't make these settings the default in the first place.
You can modify the CREATE FUNCTION verify_function part to fit your needs.
alter profile default
limit failed_login_attempts 3
password_grace_time 10
password_life_time 45
password_lock_time unlimited
password_reuse_max 180
password_reuse_time unlimited
password_verify_function VERIFY_FUNCTION;
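To double-check what ended up in the profile afterwards, a quick query of my own (not part of utlpwdmg.sql):
SQL> SELECT resource_name, limit FROM dba_profiles WHERE profile = 'DEFAULT' AND resource_type = 'PASSWORD';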
Wednesday, June 13, 2007
Quick steps to fix OMS agent problem
Sometimes the communication between the OMS agent and the OMS server gets interrupted and you have trouble syncing up the connection again. Common problems/errors include:
Getting
EMD upload error: uploadXMLFiles skipped :: OMS version not checked yet..
when you run
emctl upload
Running
emctl status agent
gives you lines like these in the output:
Last attempted heartbeat to OMS : 2007-06-07 17:12:30
Last successful heartbeat to OMS : unknown
The quick way to clear the problem is to go through the following steps:
- emctl stop agent
- delete all files in $AGENT_HOME/sysman/emd/upload and $AGENT_HOME/sysman/emd/state
- emctl clearstate agent
- emctl secure agent
- emctl start agent