With the increasing popularity of multi-core processors, it is unclear to many people, DBAs included, how Oracle calculates the required licenses for a given server configuration.
A quick guideline: for a single server, each core has a processor licensing factor of 0.50, so a quad-core CPU requires 2 processor licenses.
However, there is a catch: this equation does not apply to deployments spanning more than one multi-core server, for example a RAC setup. Two multi-core servers installed and/or running the program on 8 cores need 8 multiplied by a core processor licensing factor of 0.75, which equals 6. So instead of 4 processor licenses, this configuration requires 6.
Also note: when licensing Oracle programs with Standard Edition One or Standard Edition in the product name, a processor is counted as equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the module is counted as one occupied socket.
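As a sanity check, the arithmetic above can be expressed as a quick query. The core counts and factors here are simply the ones quoted in this post; check your own Oracle contract for the factors that actually apply to you.

```sql
-- Processor licenses = ceil(total cores x core licensing factor).
-- Factors below (0.50 single-server, 0.75 multi-server) come from the text above.
SELECT CEIL(4 * 0.50) AS single_quad_core_licenses,  -- 2 licenses
       CEIL(8 * 0.75) AS two_server_8_core_licenses  -- 6 licenses
FROM dual;
```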
Please refer to the following licensing document from Oracle for more detail.
Oracle Licensing
This is my space as an Oracle DBA, loaded with tips, scripts, and procedures to help answer the most commonly asked DBA questions and/or unorthodox ideas.
Wednesday, May 21, 2008
Tuesday, May 06, 2008
ORA-01466 and flashback query
Someone brought this up in an OTN forum post.
I did a quick search on Google and found this:
http://www.adp-gmbh.ch/ora/err/ora_01466.html
It seems a viable explanation. However, I really needed some sort of reference to verify the 5-minute time frame. Why 5 minutes? Why not 1 minute, for example?
Then I found this Metalink note:
Error ORA-01466 while executing a flashback query (Doc ID: Note:281510.1)
It explains everything.
The reason is that smon_scn_time is updated every 5 minutes in 9i. Note that in 10g, smon_scn_time is updated every 6 seconds, which is much more reasonable and makes it very unlikely to hit the error.
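For illustration, a timestamp-based flashback query like the one below (the table name and SCN are hypothetical) is the kind of statement that can raise ORA-01466 on 9i when the requested timestamp falls inside smon_scn_time's 5-minute granularity. Flashing back by SCN avoids the timestamp-to-SCN mapping altogether.

```sql
-- Timestamp-based flashback: relies on the smon_scn_time mapping.
SELECT * FROM employees
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);

-- SCN-based flashback: no timestamp mapping involved.
SELECT * FROM employees AS OF SCN 1234567;
```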
Friday, May 02, 2008
ORA-03115 unsupported network datatype
ORA-03115: unsupported network datatype or representation
This is an annoying error that comes and goes during one of our implementation projects. The application uses a utility that connects to the database from Excel. The client is version 9.2.0.1 and the database is 10.2.0.3.
After a quick search on the internet, the Metalink note OCI Application Errors with ORA-3115 (Doc ID: Note:460498.1) seemed to be a hit. After I applied the workaround, the problem went away, for now. The things I don't like about this:
- It applies not only between 9i and 10g; according to the note, the problem exists between 10.2.0.3 and 11g as well.
- The workaround is to set CURSOR_SHARING to EXACT, which is not the preferred setting if you have a lot of customized SQL running on your database.
It looks like Oracle doesn't have a patch specifically for this bug yet. If anyone knows otherwise, please drop a line.
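For reference, the workaround from the note boils down to the following. Shown here as both a session-level test and the system-wide change; as noted above, think twice before changing this on a system that runs lots of literal SQL.

```sql
-- Try it per session first to confirm it resolves the error.
ALTER SESSION SET cursor_sharing = 'EXACT';

-- Then, if appropriate, apply it instance-wide.
ALTER SYSTEM SET cursor_sharing = 'EXACT';
```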
Monday, March 31, 2008
ORA-39082 error while import using data pump
When you use Data Pump import (impdp) for a database migration, you will sometimes get ORA-39082 errors, for example:
ORA-39082: Object type ALTER_PROCEDURE:"DX"."UPDATE_MKT"
created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."UPDATE_SALES"
created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."CHECK_CURRENT"
created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"DX"."DELETE_PAST"
created with compilation warnings
After the import, when I issue
alter procedure UPDATE_MKT compile;
alter procedure DELETE_PAST compile;
etc.
they compile successfully without any errors or warnings.
The reason is that Data Pump import creates procedures before views; if your procedures depend on views, you will get ORA-39082 compilation errors at import time.
Even though the problem can easily be fixed by recompiling all procedures after the import, it is a little inconvenient for DBAs who need to do DB refreshes regularly.
Or perhaps this is Oracle's way of suggesting we don't reference views in procedures :)
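Rather than recompiling procedure by procedure, the post-import fix can be scripted with the standard UTL_RECOMP package. A sketch, using the DX schema from the errors above:

```sql
-- Recompile everything invalid in the DX schema after the import.
EXEC UTL_RECOMP.RECOMP_SERIAL('DX');

-- Verify nothing is left invalid.
SELECT object_name, object_type
FROM dba_objects
WHERE owner = 'DX' AND status = 'INVALID';
```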
There are also bugs that return a similar error:
Impdp Returns ORA-39082 When Importing Wrapped Procedures (Doc ID: Note:460267.1)
Tuesday, March 04, 2008
update gone wild
Our company is currently implementing a supply chain application to replace our outdated i2. Needless to say, I am the person who gets to set up and maintain the Oracle database for the project. The Oracle version is 10.2.0.3. The consultants kept having performance problems with one of the procedures they built. I spent some time looking into it, and found out how easy it is to screw up an Oracle database.
The first problem was a runaway session: the procedure had some sort of infinite loop and was doing massive updates without committing, and at one point the undo tablespace grew all the way up to 11 GB. After the session was killed, Oracle needed to recover all 1.4 million of those undo blocks. As you would expect, this is not a fast process, and you literally can't skip it in any way (unless you drop and recreate the database, of course). My instance was recovering at a rate of ~150 blocks/sec. You can check recovery progress by querying the view v$fast_start_transactions. I felt it was kind of slow; I'm not sure if other people have had similar experiences they can share.
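A minimal progress query against that view might look like this (column names as documented for 10g):

```sql
-- Undo blocks done vs. total for each transaction being recovered by SMON.
SELECT usn, state, undoblocksdone, undoblockstotal,
       undoblockstotal - undoblocksdone AS blocks_remaining
FROM v$fast_start_transactions;
```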
The interesting thing is that while SMON was doing the recovery, the consultant was trying to access the database from the application. Whenever the application touched the affected tables, SMON fired up a whole bunch of parallel processes in an effort to speed up the rollback. This is called parallel rollback. In this case, however, it didn't help the situation at all; on the contrary, with so many processes trying to recover the same transaction, they created quite some contention on the undo blocks. Each of these processes, plus SMON, spent a lot of time waiting for the others, and buffer busy waits events on undo blocks skyrocketed. Parallel rollback mostly improves performance for parallel transaction recovery. In this case, it's better to disable parallel rollback with
alter system set fast_start_parallel_rollback = false;
The second problem was an update statement that kept hanging the session and doing all sorts of crazy things in the database. The simple update query generated millions of buffer gets and huge numbers of db file scattered read and latch: cache buffers chains events. With the help of these queries,
select child# "cCHILD",
       addr "sADDR",
       gets "sGETS",
       misses "sMISSES",
       sleeps "sSLEEPS"
from v$latch_children
where name = 'cache buffers chains'
order by 5 desc, 1, 2, 3;

select /*+ RULE */
       e.owner || '.' || e.segment_name segment_name,
       e.extent_id extent#,
       x.dbablk - e.block_id + 1 block#,
       x.tch,
       l.child#
from sys.v$latch_children l,
     sys.x$bh x,
     sys.dba_extents e
where x.hladdr = '&ADDR' and  -- substitute a latch address (sADDR) from the first query
      e.file_id = x.file# and
      x.hladdr = l.addr and
      x.dbablk between e.block_id and e.block_id + e.blocks - 1
order by x.tch desc;
The culprit object was quickly discovered, but why?
After a quick analyze on the table, the problem went away.
I suspect this is some sort of Oracle bug. The table causing the problem has only 20k rows. It was truncated and had data inserted earlier in the procedure; after the truncate, Oracle reset the table's statistics and didn't update them after the insert. When the update kicked in, the optimizer was using the wrong information and went into some sort of endless-loop situation. For now, I have suggested the consultant run an analyze after each insert as a workaround, but I would like to find a more organic and permanent solution from Oracle.
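The workaround amounts to re-gathering statistics right after the truncate-and-insert. A DBMS_STATS call like the following is the more modern equivalent of ANALYZE; the schema and table names here are hypothetical placeholders, not the actual objects from this project.

```sql
-- Re-gather stats so the optimizer sees the post-insert row counts.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',   -- hypothetical schema name
    tabname => 'SOME_TABLE',  -- hypothetical table name
    cascade => TRUE);         -- include the table's indexes
END;
/
```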
References:
How To Identify a Hot Block Within The Database Buffer Cache (Doc ID: Note:163424.1)
LATCH: CACHE BUFFERS CHAINS (Doc ID: Note:42152.1)
Parallel Rollback may hang database, Parallel query servers get 100% cpu (Doc ID: Note:144332.1)
Tuesday, February 26, 2008
Bypass buffer cache for Full Table Scans
PayPal DBA Saibabu mentioned an interesting undocumented parameter in his blog (http://sai-oracle.blogspot.com/) that bypasses the buffer cache for full table scans:
alter session set "_serial_direct_read" = true;
I found it particularly useful in an OLTP environment where you occasionally need to run a full-table-scan query against a large table.
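A typical session-scoped use might look like this (big_table is a hypothetical name; since the parameter is undocumented, test it outside production first):

```sql
-- Enable direct path reads for serial full scans in this session only.
ALTER SESSION SET "_serial_direct_read" = true;

SELECT COUNT(*) FROM big_table;  -- the occasional large full table scan

-- Revert for the rest of the session's work.
ALTER SESSION SET "_serial_direct_read" = false;
```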
Thursday, February 21, 2008
Install Oracle 10gR2 on RHEL5 or OEL5
It has come to my attention that a lot of people are still having problems installing Oracle 10gR2 on Red Hat Enterprise Linux / Oracle Enterprise Linux 5.
Oracle has already published a series of Metalink notes covering all aspects of the issue. By following the procedures listed in these notes, you should have a successful installation.
The notes contain links to each other, so you shouldn't have any problem finding all of them once you've got one.
Requirements For Installing Oracle10gR2 On RHEL/OEL 5 (x86_64) (Doc ID: Note:421308.1)
Note 376183.1 - Defining a "default RPMs" installation of the RHEL OS
Note 419646.1 - Requirements For Installing Oracle 10gR2 On RHEL5 (x86)
Note 456634.1 - Installer Is Failing on Prereqs for Redhat-5 - RHEL5