Wednesday, June 25, 2008

Database Upgrade

For 9i/10g

Shut down the database before installing the patch.

Use runInstaller to upgrade the Oracle binaries to the higher version.

Ensure SHARED_POOL_SIZE and JAVA_POOL_SIZE are at least 150M:

SQL> SHOW PARAMETER SHARED_POOL_SIZE
SQL> SHOW PARAMETER JAVA_POOL_SIZE

If they are not, set them as follows:

SQL> ALTER SYSTEM SET SHARED_POOL_SIZE='150M' SCOPE=spfile;

SQL> ALTER SYSTEM SET JAVA_POOL_SIZE='150M' SCOPE=spfile;

Shut down database:

SQL> SHUTDOWN

Start upgrade: [10g]

SQL> STARTUP UPGRADE
SQL> SPOOL upgrade.log
SQL> @?/rdbms/admin/catupgrd.sql
SQL> SPOOL OFF


Start upgrade: [9i]

SQL> startup migrate
SQL> spool upgrade.log
SQL> @?/rdbms/admin/catpatch.sql
SQL> spool off


Review the upgrade.log file for any errors

Restart the database:

SQL> SHUTDOWN IMMEDIATE

SQL> STARTUP

Run the utlrp.sql script to recompile all invalid PL/SQL packages
SQL> @?/rdbms/admin/utlrp.sql

Start listener

$lsnrctl start LISTENER

Check database connectivity
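After connectivity is confirmed, it is worth verifying the component versions recorded in the registry and checking for leftover invalid objects. A minimal sketch, run as SYSDBA:

```sql
-- Confirm each component was upgraded and is VALID
SELECT comp_name, version, status
FROM dba_registry;

-- List any objects still invalid after utlrp.sql
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID';
```

Any component not showing VALID, or a large list of invalid objects, should be investigated before releasing the database to users.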

Friday, June 20, 2008

SQL Optimizers (RBO & CBO)

Whenever you execute a SQL statement, a component of the
database known as the optimizer must decide how best
to access the data operated on by that statement. Oracle supports two
optimizers: the rule-based optimizer (which was the original), and the cost-based
optimizer.

To figure out the optimal execution path for a statement, the
optimizers consider the following:


  • The syntax you've specified for the statement
  • Any conditions that the data must satisfy (the WHERE
    clauses)
  • The database tables your statement will need to access
  • All possible indexes that can be used in retrieving
    data from the table
  • The Oracle RDBMS version
  • The current optimizer mode
  • SQL statement hints
  • All available object statistics (generated via the
    ANALYZE command)
  • The physical table location (distributed SQL)
  • INIT.ORA settings (parallel query, async I/O, etc.)

Oracle gives you a choice of two optimizing alternatives: the
predictable rule-based optimizer and the more intelligent cost-based optimizer.


Understanding the Rule-Based Optimizer


The rule-based optimizer (RBO) uses a predefined set of
precedence rules to figure out which path it will use to access the database.
The RDBMS kernel defaults to the rule-based optimizer under a number of
conditions, including:



  • OPTIMIZER_MODE = RULE is specified in your INIT.ORA
    file
  • OPTIMIZER_MODE = CHOOSE is specified in your INIT.ORA
    file, and no statistics exist for
    any table involved in the statement
  • An ALTER SESSION SET OPTIMIZER_MODE = RULE command has
    been issued
  • An ALTER SESSION SET OPTIMIZER_MODE = CHOOSE
    command has been issued, and no statistics exist for
    any
    table involved in the statement
  • The rule hint (e.g., SELECT /*+ RULE */. . .) has been
    used in the statement

Using the RBO, the optimizer chooses an execution plan based on the access paths available and the ranks of these access paths. Oracle's ranking of the access paths is heuristic. If there is more than one way to execute a SQL statement, then the RBO always uses the operation with the lower rank. Usually, operations of lower rank execute faster than those associated with constructs of higher rank.

Rule-based optimizer condition rankings

RBO Path 1: Single Row by Rowid
RBO Path 2: Single Row by Cluster Join
RBO Path 3: Single Row by Hash Cluster Key with Unique or Primary Key
RBO Path 4: Single Row by Unique or Primary Key
RBO Path 5: Clustered Join
RBO Path 6: Hash Cluster Key
RBO Path 7: Indexed Cluster Key
RBO Path 8: Composite Index
RBO Path 9: Single-Column Indexes
RBO Path 10: Bounded Range Search on Indexed Columns
RBO Path 11: Unbounded Range Search on Indexed Columns
RBO Path 12: Sort Merge Join
RBO Path 13: MAX or MIN of Indexed Column
RBO Path 14: ORDER BY on Indexed Column
RBO Path 15: Full Table Scan

http://www.oracle.com.cn/other/9ionlinedoc/server.920/a96533/rbo.htm
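To see which of these paths the RBO picks for a given statement, you can force rule-based optimization for the session and display the plan. A minimal sketch, assuming the standard SCOTT.EMP demo table with its primary key on EMPNO (DBMS_XPLAN.DISPLAY is available from 9iR2 onward):

```sql
ALTER SESSION SET OPTIMIZER_MODE = RULE;

EXPLAIN PLAN FOR
SELECT ename FROM emp WHERE empno = 7839;   -- single row by unique key (RBO Path 4)

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```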


Understanding the Cost-Based Optimizer


The CBO determines which execution plan is most efficient by considering available access paths and by factoring in information based on statistics for the schema objects (tables or indexes) accessed by the SQL statement. The CBO also considers hints, which are optimization suggestions placed in a comment in the statement.

The CBO performs the following steps:

1. The optimizer generates a set of potential plans for the SQL statement
   based on available access paths and hints.
2. The optimizer estimates the cost of each plan based on statistics in the
   data dictionary for the data distribution and storage characteristics of
   the tables, indexes, and partitions accessed by the statement. The cost is
   an estimated value proportional to the expected resource use needed to
   execute the statement with a particular plan. The optimizer calculates the
   cost of access paths and join orders based on the estimated computer
   resources, which include I/O, CPU, and memory. Serial plans with higher
   costs take longer to execute than those with smaller costs. When using a
   parallel plan, however, resource use is not directly related to elapsed
   time.
3. The optimizer compares the costs of the plans and chooses the one with the
   lowest cost.

CBO Statistics in the Data Dictionary

The statistics used by the CBO are stored in the data dictionary. You can collect exact or estimated statistics about physical storage characteristics and data distribution in these schema objects by using the DBMS_STATS package or the ANALYZE statement.
Oracle strongly recommends that you use the DBMS_STATS package rather than ANALYZE to collect optimizer statistics. That package lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine tune your statistics collection in other ways. Further, the cost-based optimizer will eventually use only statistics that have been collected by DBMS_STATS. However, you must use the ANALYZE statement rather than DBMS_STATS for statistics collection not related to the cost-based optimizer, such as:
  • To use the VALIDATE or LIST CHAINED ROWS clauses
  • To collect information on freelist blocks
To maintain the effectiveness of the CBO, you must have statistics that are representative of the data. For table columns that contain values with large variations in number of duplicates, called skewed data, you should collect histograms.
The resulting statistics provide the CBO with information about data uniqueness and distribution. Using this information, the CBO is able to compute plan costs with a high degree of accuracy. This enables the CBO to choose the best execution plan based on the least cost.
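As an illustration of the recommended approach, a DBMS_STATS call that gathers table, index, and column statistics (including histograms where skew is detected) might look like this; the schema and table names are examples only:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    cascade    => TRUE,                        -- also gather index statistics
    method_opt => 'FOR ALL COLUMNS SIZE AUTO'  -- histograms on skewed columns
  );
END;
/
```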

How the CBO Optimizes SQL Statements for Fast Response


The CBO can optimize a SQL statement for fast response when the parameter OPTIMIZER_MODE is set to FIRST_ROWS_n, where n is 1, 10, 100, or 1000, or FIRST_ROWS. A hint FIRST_ROWS(n), where n is any positive integer, or FIRST_ROWS can be used to optimize an individual SQL statement for fast response.
Fast-response optimization is suitable for online users, such as those using Oracle Forms or Web access. Typically, online users are interested in seeing the first few rows and seldom look at the entire query result, especially when the result size is large. For such users, it makes sense to optimize the query to produce the first few rows as quickly as possible, even if the time to produce the entire query result is not minimized.
With fast-response optimization, the CBO generates a plan with the lowest cost to produce the first row or the first few rows. The CBO employs two different fast-response optimizations, referred to here as the old and new methods. The old method is used with the FIRST_ROWS hint or parameter value. With the old method, the CBO uses a mixture of costs and rules to produce a plan. It is retained for backward compatibility reasons.
The new method is totally based on costs, and it is sensitive to the value of n. With small values of n, the CBO tends to generate plans that consist of nested loop joins with index lookups. With large values of n, the CBO tends to generate plans that consist of hash joins and full table scans.
The value of n should be chosen based on the online user requirement and depends specifically on how the result is displayed to the user. Generally, Oracle Forms users see the result one row at a time and they are typically interested in seeing the first few screens. Other online users see the result one group of rows at a time.
With the fast-response method, the CBO explores different plans and computes the cost to produce the first n rows for each. It picks the plan that produces the first n rows at lowest cost. Remember that with fast-response optimization, a plan that produces the first n rows at lowest cost might not be the optimal plan to produce the entire result. If the requirement is to obtain the entire result of a query, then fast-response optimization should not be used. Instead use the ALL_ROWS parameter value or hint.
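For example, fast-response optimization can be requested at the session level or per statement; the ORDERS table below is illustrative:

```sql
-- Session level: optimize all queries to return the first 10 rows quickly
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10;

-- Statement level: the hint overrides the session setting for this query
SELECT /*+ FIRST_ROWS(10) */ order_id, order_date
FROM orders
ORDER BY order_date DESC;
```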

SQL Tuning Tips

    TIP 1: SQL cannot be shared within Oracle unless it is absolutely identical. Statements must match exactly in case, white space, and underlying schema objects to be shared within Oracle's memory. Oracle avoids the parsing step for each subsequent use of an identical statement.

    sql> SELECT NAME FROM S_CUSTOMER WHERE ID = 212;   -- statement to match
    sql> SELECT NAME FROM s_customer WHERE ID = 212;   -- differs: lower case
    sql> SELECT NAME FROM S_CUSTOMER WHERE ID=212;     -- differs: white space
    sql> SELECT NAME
         FROM S_CUSTOMER
         WHERE ID=212;                                 -- differs: white space

    • Use SQL standards within an application. Rules like the following are easy to implement and will allow more sharing within Oracle's memory.

        - Using a single case for all SQL verbs
        - Beginning all SQL verbs on a new line
        - Right or left aligning verbs within the initial SQL verb
        - Separating all words with a single space


    • Use bind variables. The values of bind variables do not need to be the same for two statements to be considered identical. Bind variables are not substituted until a statement has been successfully parsed.

    Sharable SQL:
    SELECT * FROM emp WHERE emp_no = :B1;   -- bind value: 123
    SELECT * FROM emp WHERE emp_no = :B1;   -- bind value: 987

    Non-sharable SQL:
    SELECT * FROM emp WHERE emp_no = 123;
    SELECT * FROM emp WHERE emp_no = 987;


    • Use a standard approach to table aliases. If two identical SQL statements vary because an identical table has two different aliases, then the SQL is different and will not be shared.

    • Use table aliases and prefix all column names by their aliases when more than one table is involved in a query. This reduces parse time AND prevents future syntax errors if someone adds a column to one of the tables with the same name as a column in another table. (ORA-00918: COLUMN AMBIGUOUSLY DEFINED)

    TIP 2: Beware of WHERE clauses which do not use indexes at all. Even if a column referenced in the WHERE clause is indexed, Oracle will ignore the index when the clause is written in one of the forms shown below. All of these WHERE clauses can be re-written to use an index while returning the same values. In other words, don't perform operations on database columns referenced in the WHERE clause.

    Do not use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE SUBSTR(account_name,1,7) = 'CAPITAL';
    Use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE account_name LIKE 'CAPITAL%';

    Do not use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE account_name = NVL ( :acc_name, account_name);
    Use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE account_name LIKE NVL ( :acc_name, '%');

    Do not use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE TRUNC (trans_date) = TRUNC (SYSDATE);
    Use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE trans_date BETWEEN TRUNC (SYSDATE) AND TRUNC (SYSDATE) + .99999;

    Do not use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE account_name || account_type = 'AMEXA';
    Use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE account_name = 'AMEX'
    AND account_type = 'A';

    Do not use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE amount + 3000 < 5000;
    Use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE amount < 2000;

    Do not use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE amount != 0;
    Use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE amount > 0;

    Do not use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE amount NOT = 0;
    Use:
    SELECT account_name, trans_date, amount
    FROM transaction
    WHERE amount > 0;

    TIP 3: Don't forget to tune views. Views are SELECT statements and can be tuned in just the same way as any other type of SELECT statement. All tuning techniques applicable to SQL statements are equally applicable to views.

    TIP 4: Avoid including a HAVING clause in SELECT statements. The HAVING clause filters selected rows only after all rows have been fetched. Using a WHERE clause helps reduce overheads in sorting, summing, etc. HAVING clauses should only be used when columns with summary operations applied to them are restricted by the clause.

    Do not use:
    SELECT region, AVG (loc_size)
    FROM location
    GROUP BY region
    HAVING region != 'SYDNEY'
    AND region != 'PERTH';
    Use:
    SELECT region, AVG (loc_size)
    FROM location
    WHERE region != 'SYDNEY'
    AND region != 'PERTH'
    GROUP BY region;

    TIP 5: Minimize the number of table lookups (subquery blocks) in queries, particularly if your statements include subquery SELECTs or multicolumn UPDATEs.

    Separate subqueries:
    SELECT emp_name
    FROM emp
    WHERE emp_cat = (SELECT MAX (category)
                     FROM emp_categories)
    AND emp_range = (SELECT MAX (sal_range)
                     FROM emp_categories)
    AND emp_dept = 0020;

    Combined subquery:
    SELECT emp_name
    FROM emp
    WHERE (emp_cat, emp_range) =
          (SELECT MAX (category), MAX (sal_range)
           FROM emp_categories)
    AND emp_dept = 0020;

    TIP 6: Consider the alternatives EXISTS, IN and table joins when doing multiple table joins. None of these are consistently faster; it depends on your data. If there is a poor performer here, it's likely the IN clause.

    (Note, this query returns the employee names from each department in department category 'A'.)

    EXISTS:
    SELECT emp_name
    FROM emp E
    WHERE EXISTS (SELECT 'X'
                  FROM dept
                  WHERE dept_no = E.dept_no
                  AND dept_cat = 'A');

    IN:
    SELECT emp_name
    FROM emp E
    WHERE dept_no IN (SELECT dept_no
                      FROM dept
                      WHERE dept_cat = 'A');

    Join:
    SELECT emp_name
    FROM dept D, emp E
    WHERE E.dept_no = D.dept_no
    AND D.dept_cat = 'A';

    TIP 7: Avoid joins that require the DISTINCT qualifier on the SELECT list in queries which are used to determine information at the owner end of a one-to-many relationship. The DISTINCT operator causes Oracle to fetch all rows satisfying the table join and then sort and filter out duplicate values. EXISTS is a faster alternative, because the Oracle optimizer realizes when the subquery has been satisfied once, there is no need to proceed further and the next matching row can be fetched.

    (Note: This query returns all department numbers and names which have at least one employee.)

    Do not use:
    SELECT DISTINCT D.dept_no, D.dept_name
    FROM dept D, emp E
    WHERE D.dept_no = E.dept_no;
    Use:
    SELECT dept_no, dept_name
    FROM dept D
    WHERE EXISTS (SELECT 'X'
                  FROM emp E
                  WHERE E.dept_no = D.dept_no);

    TIP 8: Consider whether a UNION ALL will suffice in place of a UNION. The UNION clause forces all rows returned by each portion of the UNION to be sorted and merged and duplicates to be filtered before the first row is returned. A UNION ALL simply returns all rows including duplicates and does not have to perform any sort, merge or filter. If your tables are mutually exclusive (include no duplicate records), or you don't care if duplicates are returned, the UNION ALL is much more efficient.

    UNION:
    SELECT acct_num, balance_amt
    FROM debit_transactions
    WHERE tran_date = '31-DEC-95'
    UNION
    SELECT acct_num, balance_amt
    FROM credit_transactions
    WHERE tran_date = '31-DEC-95';

    UNION ALL:
    SELECT acct_num, balance_amt
    FROM debit_transactions
    WHERE tran_date = '31-DEC-95'
    UNION ALL
    SELECT acct_num, balance_amt
    FROM credit_transactions
    WHERE tran_date = '31-DEC-95';

    TIP 9: Consider using DECODE to avoid having to scan the same rows repetitively or join the same table repetitively. Note, DECODE is not necessarily faster as it depends on your data and the complexity of the resulting query. Also, using DECODE requires you to change your code when new values are allowed in the field.

    Do not use (two scans):
    SELECT COUNT(*)
    FROM emp
    WHERE status = 'Y'
    AND emp_name LIKE 'SMITH%';

    SELECT COUNT(*)
    FROM emp
    WHERE status = 'N'
    AND emp_name LIKE 'SMITH%';

    Use (one scan):
    SELECT COUNT(DECODE(status, 'Y', 'X', NULL)) Y_count,
           COUNT(DECODE(status, 'N', 'X', NULL)) N_count
    FROM emp
    WHERE emp_name LIKE 'SMITH%';

    TIP 10: Oracle automatically performs simple column type conversions (or casting) when it compares columns of different types. Depending on the type of conversion, indexes may not be used. Make sure you declare your program variables as the same type as your Oracle columns, if the type is supported in the programming language you are using.

    Example 1: emp_no is an indexed NUMBER column.

    Your query:
    SELECT ...
    FROM emp
    WHERE emp_no = '123';

    After implicit conversion:
    SELECT ...
    FROM emp
    WHERE emp_no = TO_NUMBER('123');

    Index used? YES (the conversion is applied to the literal).

    Example 2: emp_type is an indexed VARCHAR2 column.

    Your query:
    SELECT ...
    FROM emp
    WHERE emp_type = 123;

    After implicit conversion:
    SELECT ...
    FROM emp
    WHERE TO_NUMBER (emp_type) = 123;

    Index used? NO! (the conversion is applied to the indexed column).

Predicate Selectivity

Purpose
~~~~~~~

This article explains how the CBO determines the selectivity for various
predicates.

Audience
~~~~~~~~
This article is aimed at Application Developers, DBAs, and support analysts
interested in understanding how the CBO determines how selective a
particular access path is.

Selectivity
~~~~~~~~~~~
Selectivity is a measure of the proportion of a row source retrieved by
application of a particular predicate or combination of predicates.

Within the Oracle kernel it is expressed as a value between 0 and 1.
The closer the value is to 0 the more selective the predicate is.
Selectivity is only used by the CBO.

Basic Selectivity formula:
~~~~~~~~~~~~~~~~~~~~~~~~~~

              Number of records satisfying a condition
Selectivity = ----------------------------------------
              Total number of records

In the optimizer, selectivity is used to compare the usefulness of various
predicates in combination with base object costs.

Knowing the proportion of the total data set that a column predicate defines
is very helpful in defining actual access costs.

By default, column selectivity is based on the high and low values and the
number of values in the column with an assumption of even distribution of
data between these two points.

Histogram data can give better selectivity estimates for unevenly distributed
column data. There is more discussion regarding histograms later.

Selectivity is also used to define the cardinality of a particular row source
once predicates have been applied. Cardinality is the expected number of rows
that will be retrieved from a row source. Cardinality is useful in determining
nested loop join and sort costs. Application of selectivity to the original
cardinality of the row source will produce the expected (computed) cardinality
for the row source.


Glossary of Terms:
~~~~~~~~~~~~~~~~~~

NDV          Number of Distinct Values
Cardinality  Number of rows
Selectivity  Proportion of a dataset returned by a particular predicate
             (or group of predicates)

In the following illustrations there are two tables (T1 and T2) with join
columns c1 and c2 respectively.

Selectivities:
~~~~~~~~~~~~~~
Without histograms
~~~~~~~~~~~~~~~~~~
c1 = '4076'     1/NDV
c1 > '4076'     (High - Value) / (High - Low)
c1 >= '4076'    (High - Value) / (High - Low) + 1/NDV
c1 like '4076'  1/NDV
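As a worked example (illustrative numbers): suppose column c1 has NDV = 50, Low = 1000 and High = 6000. Under the even-distribution assumption:

Sel(c1 = 4000)  = 1/NDV                        = 1/50        = 0.02
Sel(c1 > 4000)  = (6000 - 4000) / (6000 - 1000) = 2000/5000  = 0.40
Sel(c1 >= 4000) = 0.40 + 1/NDV                 = 0.40 + 0.02 = 0.42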

Join selectivity
~~~~~~~~~~~~~~~~

The selectivity of a join is defined as the selectivity of the most selective
join column adjusted by the proportion of not-null values in each join column.


Sel = 1/max[NDV(t1.c1), NDV(t2.c2)] *
      ((Card t1 - # t1.c1 NULLs) / Card t1) *
      ((Card t2 - # t2.c2 NULLs) / Card t2)

Bind Variable selectivity
~~~~~~~~~~~~~~~~~~~~~~~~~

Bind variables present a special case because the optimizer has no idea what
the bind variable value is prior to query optimization. This does not present
a problem with equality predicates since a uniform distribution of data is
assumed and the selectivity is taken as 1/NDV for the column. However for
range predicates it presents a major issue because the optimizer does not
know where the range starts or stops. Because of this the optimizer has to
make some assumptions as follows:

c1 = :bind1 1/NDV
c1 > :bind1 Default of 5%
c1 >= :bind1 Default of 5%
c1 like :bind1 Default of 25%

For more information on bind variables see Note 70075.1

Selectivity With Histograms
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Histograms provide additional information about column selectivity for
columns whose distribution is non uniform. Histograms store information about
column data value ranges. Each range is stored in a single row and is often
called a 'bucket'. There are two different methods for storing histograms in
Oracle. If there is a small number of distinct column values (i.e., fewer than
the number of buckets), the column value and the count of that value are
stored. If not, then a series of endpoints is stored to enable more accurate
selectivity to be determined.

The first method allows the accurate figures to be used. However with
inexact histograms the terms popular and non-popular value are introduced
and are used to help determine selectivity. A popular value is a value that
spans multiple endpoints whereas a non-popular value does not.
See Note 72539.1 for more information on histograms.
Exact histograms
~~~~~~~~~~~~~~~~
c1 = '4076'    count of value '4076' / Total Number of Rows
c1 > '4076'    count of values > '4076' / Total Number of Rows

Inexact histograms
~~~~~~~~~~~~~~~~~~
col = popular value       # popular buckets / # buckets
col = non-popular value   density
col > value               # buckets > value / # buckets


Rules for combining selectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Let P1 and P2 be 2 distinct predicates of query Q

P1 AND P2
S(P1&P2) = S(P1) * S(P2)
P1 OR P2
S(P1|P2) = S(P1) + S(P2) -[S(P1) * S(P2)]
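A quick worked example of these combination rules (illustrative values): if S(P1) = 0.10 and S(P2) = 0.05, then:

S(P1 AND P2) = 0.10 * 0.05                 = 0.005
S(P1 OR P2)  = 0.10 + 0.05 - (0.10 * 0.05) = 0.145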

Index Selectivity for concatenated indexes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Starting with 10.2, when a concatenated index, with all its columns having
equality predicates, is used as an access path, the optimizer uses 1/NDK as
the selectivity (where NDK is the number of distinct keys in the index).

On 9.2.0.7 and 9.2.0.8 this may be enabled with event 38060 level 1.
On 10.1.0.4 and 10.1.0.5 this may be enabled with event 38059 level 1.
On 10.2 adjustments will be made to the selectivity to account for nulls
in the index keys. This also occurs on 10.1.0.5 (with event 38059.)


Join cardinality
~~~~~~~~~~~~~~~~

Card(Pj) = Card(T1) * Card(T2) * Sel(Pj)
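For example (illustrative values): if Card(T1) = 1000, Card(T2) = 500 and Sel(Pj) = 0.002, then the expected join cardinality is:

Card(Pj) = 1000 * 500 * 0.002 = 1000 rows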
 

Bind variables in queries

Purpose
~~~~~~~
This article is intended to provide more information on the usage of bind
variables in queries.

Scope & Application
~~~~~~~~~~~~~~~~~~~
This article is aimed at application designers and users who may not
understand the implications, for their applications and for the database,
of using bind variables.

Bind variables
~~~~~~~~~~~~~~

Bind variables are placeholders for query input values. They are a pointer to
a memory location where the data value(s) will be placed.

Note that the presence of bind variables has no effect on queries that are
optimized using the RBO. They only affect CBO query optimization, because the
CBO attempts to use column value information to determine the optimal access
path for the query. When no values are supplied, the CBO may make a
sub-optimal plan choice.

Advantages of bind variables:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

o When a bind variable as opposed to a hardcoded value is placed in a query,
the query code does not have to change each time the query is run. This means
that the code does not need to be reparsed and can be shared between sessions,
and you do not need to maintain a copy of the statement for each value used
in the query. The effect of this is to reduce the amount of space used in the
shared pool to store almost identical copies of SQL statements.

NB: sharing also depends on other factors, e.g.:
o identical objects and object owners must be referenced
o bind variables must have the same datatype
o etc.

Disadvantages of bind variables:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

o When a SQL statement is optimized, the optimizer is unable to use the
current bind value. If it did, then the plan chosen for that value might be
excessively poor for other values. Also, the plan chosen would depend on which
value was supplied first. Because of this the optimizer must either choose the
average selectivity of that column (the density) or use defaults. This may
result in the generation of a sub-optimal plan.

The CBO is unable to determine accurate selectivities for range predicates
containing bind variables. The CBO uses column value data to adjust
selectivities. If it does not have any data values to do this with (such as
when bind variables are used) then this is not possible and assumptions have
to be made. For queries with range predicates using bind variables, we have no
way of calculating the selectivity, so we use a hardcoded default value of 5%.
This is true irrespective of histograms, as the CBO does not know the value of
the bind variable.

Selectivity for bind variables with 'like' predicates defaults to 25%.

Range Predicate Example:
~~~~~~~~~~~~~~~~~~~~~~~~
SELECT ename FROM emp WHERE empno > 9999;
SELECT ename FROM emp WHERE empno > :bind1;

Assuming the table has been analyzed, CBO knows the HIGH and LOW values for
empno and that the values are evenly distributed between these points.
For the first statement, CBO can determine the selectivity for the
where clause 'where empno >9999' - it uses the assumption that values
are evenly distributed to enable it to estimate the number of values between
the supplied value and the HIGH value.

For the second statement, it does not know what the value of :bind1 is,
so it is unable to use the same assumption and uses the default selectivity
of 5%.

It is possible to test the effect of using bind variables as opposed to
literals by setting up a variable in SQL*Plus and assigning a value to it.
Setup details for a numeric bind variable called bindvar:

variable bindvar number;
begin
:bindvar:=10;
end;
/
SELECT * FROM emp WHERE deptno = :bindvar;

Bind variable selectivities:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
See Note 68992.1

Advice
~~~~~~

o Use bind variables for queries which are executed many times,
e.g. OLTP environments.
o Use literals where queries are not repeatedly executed and where the query
is sensitive to column selectivities.
o If bind variables are being used, then use hints to force the desired access
path.
o Balance the use of bind variables and shareable code with the need to
produce accurate plans.
 

Sunday, June 15, 2008

How to apply Critical Patch Update using Opatch

1. Download interim patch to /home/patchnumber (ftp updates.oracle.com with metalink user id and password)

2. Shutdown the databases and listener

3. Set env variables

setenv ORACLE_HOME /oracle/10.2.0
setenv LD_LIBRARY_PATH ${ORACLE_HOME}/lib
setenv TNS_ADMIN ${ORACLE_HOME}/network/admin
set filec
setenv PATH .:/usr/sbin:/usr/bin:$ORACLE_HOME/Opatch:$ORACLE_HOME/bin

4. Install patch

cd /home/4567866
opatch apply

5. Start the affected database and complete the post installation steps

cd $ORACLE_HOME/cpu/CPUOct2005
sqlplus "/ as sysdba"
SQL> STARTUP
SQL> @catcpu.sql
SQL> @?/rdbms/admin/utlrp.sql
6. Start listener

Patch Deinstallation Instructions
To de-install the patch, change to the directory where the patch is located and then run opatch as follows:

cd 4567866
opatch rollback -id 4567866

Start up all database instances running out of the ORACLE_HOME being patched. For each such instance, connect to the database using SQL*Plus as SYSDBA and run catcpu_rollback.sql as follows:

cd $ORACLE_HOME/cpu/CPUOct2005
sqlplus /nolog
SQL> connect / as sysdba

SQL> STARTUP
SQL> @catcpu_rollback.sql
SQL> QUIT

Inspect the log file and the utlrp log files for any errors.
If catcpu.sql reports any invalid objects, Oracle recommends that you compile the invalid objects as follows:


cd $ORACLE_HOME/rdbms/admin

sqlplus /nolog
SQL> connect / as sysdba
SQL> @utlrp.sql
SQL> QUIT

You can check for any invalid objects by executing the following statement:
select OBJECT_NAME from DBA_OBJECTS where status = 'INVALID';


Functions within OPatch

If you type opatch -h you will get a help list of the commands that can be used with OPatch.
Here are some of them:

opatch apply -- Used to apply a patch.

opatch apply -force -- Used to apply a patch even if there is a conflict with a previously applied interim patch. This will deinstall the previous patch and install the new one.

opatch lsinventory -- Used to list the inventory for a particular $ORACLE_HOME, or to display all installations that can be found.

opatch version -- Used to display the version of the OPatch utility.

Adding -report to any command prints the actions on screen without executing them.
Every command takes a number of arguments; to see them, issue -help along with the command to display the options.
Ex. opatch apply -help



Oracle Partitioning

RANGE PARTITION

CREATE TABLE SCOTT.SALESTAB
( PROD_ID NUMBER ,
CUST_ID NUMBER ,
TIME_ID DATE ,
CHANNEL_ID CHAR(1) ,
PROMO_ID NUMBER ,
QUANTITY_SOLD NUMBER(10,2),
AMOUNT_SOLD NUMBER(10,2)
)
PARTITION BY RANGE (TIME_ID)
(
PARTITION SALES_1998 VALUES LESS THAN (TO_DATE('1999-01-01','YYYY-MM-DD')),
PARTITION SALES_1999 VALUES LESS THAN (TO_DATE('2000-01-01', 'YYYY-MM-DD')),

PARTITION SALES_2000 VALUES LESS THAN (TO_DATE('2001-07-01','YYYY-MM-DD')),
PARTITION SALES_DEFT VALUES LESS THAN (MAXVALUE)
);
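After creating a partitioned table, the partition layout can be verified from the data dictionary. A minimal sketch, run as the table owner (use DBA_TAB_PARTITIONS with an owner filter otherwise):

```sql
SELECT partition_name, high_value, tablespace_name
FROM user_tab_partitions
WHERE table_name = 'SALESTAB'
ORDER BY partition_position;
```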

LIST PARTITION

CREATE TABLE SCOTT.SALESTAB
( PROD_ID NUMBER ,
CUST_ID NUMBER ,
TIME_ID DATE ,
CHANNEL_ID CHAR(1) ,
PROMO_ID NUMBER ,
QUANTITY_SOLD NUMBER(10,2),
AMOUNT_SOLD NUMBER(10,2)
)
PARTITION BY LIST (PROD_ID)
(
PARTITION P1 VALUES (13, 14, 15, 16, 17, 18, 19, 20, 21),
PARTITION P2 VALUES (33, 34, 35, 36, 37, 38, 39, 40, 41),
PARTITION P3 VALUES (117,118, 119, 120, 121, 122, 123, 124),
PARTITION P4 VALUES (134, 135, 136, 137, 138, 139, 140),
PARTITION PD VALUES (DEFAULT)
);

HASH PARTITION

CREATE TABLE SCOTT.SALESTAB
( PROD_ID NUMBER ,
CUST_ID NUMBER ,
TIME_ID DATE ,
CHANNEL_ID CHAR(1) ,
PROMO_ID NUMBER ,
QUANTITY_SOLD NUMBER(10,2),
AMOUNT_SOLD NUMBER(10,2)
)
PARTITION BY HASH (PROD_ID)
(
PARTITION P1 TABLESPACE USERS,
PARTITION P2 TABLESPACE USERS,
PARTITION P3 TABLESPACE USERS,
PARTITION P4 TABLESPACE USERS
);


OR

CREATE TABLE SCOTT.SALESTAB
( PROD_ID NUMBER ,
CUST_ID NUMBER ,
TIME_ID DATE ,
CHANNEL_ID CHAR(1) ,
PROMO_ID NUMBER ,
QUANTITY_SOLD NUMBER(10,2),
AMOUNT_SOLD NUMBER(10,2)
)
PARTITION BY HASH (PROD_ID)
PARTITIONS 4 STORE IN (USERS, USERS, USERS, USERS);



COMPOSITE RANGE HASH PARTITIONING

CREATE TABLE SCOTT.SALESTAB
( PROD_ID NUMBER ,
CUST_ID NUMBER ,
TIME_ID DATE ,
CHANNEL_ID CHAR(1) ,
PROMO_ID NUMBER ,
QUANTITY_SOLD NUMBER(10,2),
AMOUNT_SOLD NUMBER(10,2)
)
PARTITION BY RANGE (TIME_ID)
SUBPARTITION BY HASH (PROD_ID)
SUBPARTITION TEMPLATE
(
SUBPARTITION P1 TABLESPACE USERS,
SUBPARTITION P2 TABLESPACE USERS,
SUBPARTITION P3 TABLESPACE USERS,
SUBPARTITION P4 TABLESPACE USERS
)
(
PARTITION SALES_1998 VALUES LESS THAN (TO_DATE('1999-01-01','YYYY-MM-DD')),
PARTITION SALES_1999 VALUES LESS THAN (TO_DATE('2000-01-01', 'YYYY-MM-DD')),
PARTITION SALES_2000 VALUES LESS THAN (TO_DATE('2001-07-01','YYYY-MM-DD')),
PARTITION SALES_DEFT VALUES LESS THAN (MAXVALUE)
);

COMPOSITE RANGE LIST PARTITIONING

CREATE TABLE SCOTT.SALESTAB
( PROD_ID NUMBER ,
CUST_ID NUMBER ,
TIME_ID DATE ,
CHANNEL_ID CHAR(1) ,
PROMO_ID NUMBER ,
QUANTITY_SOLD NUMBER(10,2),
AMOUNT_SOLD NUMBER(10,2)
)
PARTITION BY RANGE (TIME_ID)
SUBPARTITION BY LIST (PROD_ID)
SUBPARTITION TEMPLATE
(
SUBPARTITION P1 VALUES (13, 14, 15, 16, 17, 18, 19, 20, 21),
SUBPARTITION P2 VALUES (33, 34, 35, 36, 37, 38, 39, 40, 41),
SUBPARTITION P3 VALUES (117,118, 119, 120, 121, 122, 123, 124),
SUBPARTITION P4 VALUES (134, 135, 136, 137, 138, 139, 140),
SUBPARTITION PD VALUES (DEFAULT)
)
(
PARTITION SALES_1998 VALUES LESS THAN (TO_DATE('1999-01-01','YYYY-MM-DD')),
PARTITION SALES_1999 VALUES LESS THAN (TO_DATE('2000-01-01', 'YYYY-MM-DD')),
PARTITION SALES_2000 VALUES LESS THAN (TO_DATE('2001-07-01','YYYY-MM-DD')),
PARTITION SALES_DEFT VALUES LESS THAN (MAXVALUE)
);

INDEX ORGANIZED TABLES PARTITIONING

CREATE TABLE SCOTT.SALESIOT
( PROD_ID NUMBER ,
CUST_ID NUMBER ,
TIME_ID DATE ,
CHANNEL_ID CHAR(1) ,
PROMO_ID NUMBER ,
QUANTITY_SOLD NUMBER(10,2),
AMOUNT_SOLD NUMBER(10,2),
CONSTRAINT PK_CONS_SALESIOT PRIMARY
KEY(PROD_ID,CUST_ID,CHANNEL_ID,PROMO_ID,TIME_ID)
)
ORGANIZATION INDEX
PARTITION BY LIST (PROD_ID)
(
PARTITION P1 VALUES (13, 14, 15, 16, 17, 18, 19, 20, 21),
PARTITION P2 VALUES (33, 34, 35, 36, 37, 38, 39, 40, 41),
PARTITION P3 VALUES (117,118, 119, 120, 121, 122, 123, 124),
PARTITION P4 VALUES (134, 135, 136, 137, 138, 139, 140),
PARTITION PD VALUES (DEFAULT)
);
LOCAL INDEXES

CREATE INDEX SALESIDX_LOCAL ON SALESTAB(TIME_ID)
LOCAL
(
PARTITION SALES_1998_IDX,
PARTITION SALES_1999_IDX,
PARTITION SALES_2000_IDX,
PARTITION SALES_DEFT_IDX
);


GLOBAL PARTITIONED INDEXES

CREATE INDEX SALESIDX_GLOBAL ON SALESTAB(TIME_ID)
GLOBAL PARTITION BY RANGE(TIME_ID)
(
PARTITION SALES_LT_2000 VALUES LESS THAN (TO_DATE('2001-07-01','YYYY-MM-DD')),
PARTITION SALES_DEFAULT VALUES LESS THAN (MAXVALUE)
);



GLOBAL NON-PARTITIONED INDEXES

CREATE INDEX SALESIDX_GLOBAL_NP ON SALESTAB(TIME_ID);

CREATE INDEX SALESIDX_PROMO ON SALESTAB(PROMO_ID);

INSERT INTO SALESTAB SELECT * FROM SALES;


COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','SALESTAB');

SELECT TABLE_NAME,PARTITION_NAME,NUM_ROWS FROM
USER_TAB_PARTITIONS WHERE TABLE_NAME='SALESTAB';



SELECT TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,NUM_ROWS
FROM USER_TAB_SUBPARTITIONS WHERE TABLE_NAME='SALESTAB';

SELECT INDEX_NAME,PARTITION_NAME,NUM_ROWS,STATUS FROM USER_IND_PARTITIONS WHERE
INDEX_NAME='SALESIDX_LOCAL';





SELECT INDEX_NAME,INDEX_TYPE,STATUS FROM USER_INDEXES WHERE
STATUS='UNUSABLE';



SQL> ALTER TABLE SALESTAB TRUNCATE PARTITION SALES_1998;
Table truncated.



SQL> ALTER TABLE SALESTAB DROP PARTITION SALES_1999;
Table altered.



SQL> SELECT INDEX_NAME,INDEX_TYPE,STATUS FROM USER_INDEXES
WHERE STATUS<>'VALID';



INDEX_NAME INDEX_TYPE STATUS
------------------------------ --------------------------- --------
SALESIDX_PROMO NORMAL UNUSABLE
PK_CONS_SALESTAB NORMAL UNUSABLE



SQL> ALTER INDEX SALESIDX_PROMO REBUILD;
Index altered.



SQL> ALTER INDEX PK_CONS_SALESTAB REBUILD;
Index altered.



SQL> SELECT INDEX_NAME,PARTITION_NAME,NUM_ROWS,STATUS FROM
USER_IND_PARTITIONS WHERE INDEX_NAME='SALESIDX_LOCAL';



INDEX_NAME                     PARTITION_NAME                   NUM_ROWS STATUS
------------------------------ ------------------------------ ---------- --------
SALESIDX_LOCAL SALES_1998_IDX 178834 USABLE
SALESIDX_LOCAL SALES_1999_IDX 247945 USABLE
SALESIDX_LOCAL SALES_2000_IDX 356546 USABLE
SALESIDX_LOCAL SALES_DEFT_IDX 135518 USABLE



SQL> ALTER TABLE SALESTAB TRUNCATE PARTITION SALES_2000 UPDATE
GLOBAL INDEXES;
Table truncated.



SQL> ALTER TABLE SALESTAB DROP PARTITION SALES_2000 UPDATE
GLOBAL INDEXES;
Table altered.



SQL> SELECT INDEX_NAME,INDEX_TYPE,STATUS FROM USER_INDEXES
WHERE STATUS='UNUSABLE';
no rows selected



ADD PARTITION

A partition can only be added above the table's current highest bound, so this works only when no MAXVALUE partition exists; if one does, SPLIT the top partition instead (see SPLIT below).



SQL> ALTER TABLE SCOTT.SALESTAB ADD PARTITION SALES_DEFT
VALUES LESS THAN (MAXVALUE);
Table altered.


EXCHANGE



SQL> CREATE TABLE SALESTAB_SALES_1998 AS SELECT * FROM
SALESTAB WHERE 1=2;
Table created.

SQL> ALTER TABLE SALESTAB EXCHANGE PARTITION SALES_1998 WITH TABLE
SALESTAB_SALES_1998;
Table altered.
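The exchange is a data-dictionary swap, so it completes almost instantly regardless of segment size; the partition's rows now live in the standalone table. A quick sanity check (a sketch; the counts depend on your data):

```sql
-- After the exchange the partition is empty and the staging table holds its rows
SELECT COUNT(*) FROM SALESTAB PARTITION (SALES_1998);
SELECT COUNT(*) FROM SALESTAB_SALES_1998;
```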

MERGE



ALTER TABLE SALESTAB MERGE PARTITIONS SALES_1998, SALES_1999
INTO PARTITION SALES_1999;

SQL> alter index SALESIDX_PROMO rebuild;
Index altered.

SQL> ALTER INDEX SALESIDX_LOCAL REBUILD PARTITION SALES_1999_IDX;
Index altered.

SPLIT



SQL> ALTER TABLE SALESTAB SPLIT PARTITION SALES_1999 AT (TO_DATE('1999-01-01','YYYY-MM-DD'))
INTO (PARTITION SALES_1998 ,PARTITION SALES_1999);
Table altered.

ALTER TABLE SALESTAB SPLIT PARTITION SALES_DEFT AT (TO_DATE('2001-01-01','YYYY-MM-DD'))
INTO (PARTITION SALES_2001 ,PARTITION SALES_DEFT);

MOVE



SQL> ALTER TABLE SALESTAB MOVE PARTITION SALES_1998 TABLESPACE
TESTDATA3;
Table altered.

SQL> alter index SALESIDX_LOCAL rebuild partition SALES_1998;
Index altered.



SQL> alter index SALESIDX_PROMO rebuild;
Index altered.


REDEFINE PARTITION ONLINE
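Before starting an online redefinition, it is worth confirming the partition is a valid candidate. A sketch using the documented DBMS_REDEFINITION.CAN_REDEF_TABLE check, which raises an error if redefinition is not possible:

```sql
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE (
    UNAME     => 'SCOTT',
    TNAME     => 'SALESTAB',
    PART_NAME => 'SALES_2000'
  );
END;
/
```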



BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE (
UNAME => 'SCOTT',
ORIG_TABLE => 'SALESTAB',
INT_TABLE => 'SALESTAB_NEW',
PART_NAME => 'SALES_2000'
);
END;
/



BEGIN
DBMS_REDEFINITION.SYNC_INTERIM_TABLE (
UNAME => 'SCOTT',
ORIG_TABLE => 'SALESTAB',
INT_TABLE => 'SALESTAB_NEW',
PART_NAME => 'SALES_2000'
);
END;
/

BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE (
UNAME => 'SCOTT',
ORIG_TABLE => 'SALESTAB',
INT_TABLE => 'SALESTAB_NEW',
PART_NAME => 'SALES_2000'
);
END;
/




Tuesday, June 3, 2008

Transparent Data Encryption

Transparent Data Encryption (TDE) protects sensitive column data in the database. When a row is inserted into a protected column, the database transparently encrypts the value before storing it; when the column is selected, the database transparently decrypts it.


Specify Wallet location in sqlnet.ora


ENCRYPTION_WALLET_LOCATION =

(SOURCE=

(METHOD=file)

(METHOD_DATA=

(DIRECTORY=G:\oracle11g\wallet)))


Create the wallet and set the password to access it.


alter system set encryption key identified by "wallet";

The wallet must be opened explicitly before encrypted data can be accessed. Whenever the database is restarted, the wallet must be reopened with the same password:


alter system set encryption wallet open identified by "wallet";


Close the wallet using


alter system set encryption wallet close;
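The wallet state can be confirmed from the data dictionary; a sketch querying the V$ENCRYPTION_WALLET view (OPEN or CLOSED is reported in the STATUS column):

```sql
SELECT WRL_TYPE, WRL_PARAMETER, STATUS
FROM V$ENCRYPTION_WALLET;
```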


Use the command below to encrypt the SAL column of the EMP table:


alter table emp modify(sal encrypt);
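By default TDE encrypts with AES192; a different algorithm can be named at encrypt time. A sketch, assuming the wallet is open:

```sql
-- Encrypt with an explicit algorithm instead of the AES192 default
ALTER TABLE EMP MODIFY (SAL ENCRYPT USING 'AES256');
```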


To index an encrypted column, the salt must first be removed from it:


SQL> create index idx_emp on emp(sal);

create index idx_emp on emp(sal)

*ERROR at line 1:ORA-28338: cannot encrypt indexed column(s) with salt

SQL> alter table emp modify sal encrypt no salt;
Table altered.

SQL> create index idx_emp on emp(sal);
Index created.


To decrypt the column, use:


SQL> alter table emp modify(sal decrypt);
Table altered.

Exporting Encrypted Data


When you export a table that has an encrypted column, the data in the dump file is written in clear text, including the encrypted column; the only indication is a warning (ORA-39173) in the log file:


>expdp scott/tiger dumpfile=scott.dmp logfile=scott.log directory=dumpdir
Export: Release 11.1.0.6.0 - Production on Tuesday, 03 June, 2008 21:55:27
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** dumpfile=scott.dmp logfile=scott.log directory=dumpdir
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."DEPT"       5.937 KB       4 rows
. . exported "SCOTT"."EMP"        8.609 KB      15 rows
. . exported "SCOTT"."SALGRADE"   5.867 KB       5 rows
. . exported "SCOTT"."BONUS"          0 KB       0 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is: G:\ORACLE11G\BACKUP\SCOTT.DMP
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at 21:56:53


Use the ENCRYPTION_PASSWORD parameter of expdp to keep the data encrypted in the dump file; the same password must be supplied at import time:


>expdp scott/tiger dumpfile=scott.dmp logfile=scott.log directory=dumpdir encryption_password=mywallet
Export: Release 11.1.0.6.0 - Production on Tuesday, 03 June, 2008 22:10:57
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** dumpfile=scott.dmp logfile=scott.log directory=dumpdir encryption_password=********
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."DEPT"       5.945 KB       4 rows
. . exported "SCOTT"."EMP"        8.617 KB      15 rows
. . exported "SCOTT"."SALGRADE"   5.875 KB       5 rows
. . exported "SCOTT"."BONUS"          0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is: G:\ORACLE11G\BACKUP\SCOTT.DMP
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 22:12:09