
Saturday, August 9, 2008

10046/10053 trace with tkprof

(1) Create a 10046/10053 trace file using the following steps:

$ sqlplus /nolog
connect username/password
alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
alter session set events '10046 trace name context forever, level 12';
alter session set events '10053 trace name context forever, level 1';
-- If the query has bind variables, define them using the correct data types and values:
--variable b1 varchar2(10)
--variable b2 number
--begin
--  :b1 := 'bind value 1';
--  :b2 := 42;
--end;
--/

-- Run the query with a distinct comment /* comment_ */ to force a hard parse

select /* mycomm_2k08 */ empno,ename from emp where empno=10001;

select 'close the cursor' from dual;
exec dbms_session.reset_package;
alter session set events '10046 trace name context off';
alter session set events '10053 trace name context off';
quit

(2) Recreate the plan table to ensure it is the correct format for this version of the database:

$ sqlplus /nolog
connect username/password
drop table plan_table;
@?/rdbms/admin/utlxplan
quit

(3) Find the raw 10046/10053 trace file in the user dump destination and run tkprof against it:

$ tkprof <tracefile>.trc <outputfile>.tkprof sort=exeela waits=yes explain=username/password
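To locate the raw trace file, the queries below can be run in the tracing session itself (i.e., during step 1). This is a minimal sketch assuming access to the V$ views; the exact trace file name varies by platform and version but normally includes the server process ID (SPID):

-- directory the trace file is written to
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';

-- SPID of the current session's server process; the trace file name usually contains it
SELECT p.spid
FROM v$session s, v$process p
WHERE s.paddr = p.addr
AND s.sid = (SELECT sid FROM v$mystat WHERE ROWNUM = 1);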

Friday, June 20, 2008

SQL Optimizers (RBO & CBO)

Whenever you execute a SQL statement, a component of the
database known as the optimizer must decide how best
to access the data operated on by that statement. Oracle supports two
optimizers: the rule-based optimizer (the original) and the cost-based
optimizer.

To figure out the optimal execution path for a statement, the
optimizers consider the following:


  • The syntax you've specified for the statement
  • Any conditions that the data must satisfy (the WHERE
    clauses)
  • The database tables your statement will need to access
  • All possible indexes that can be used in retrieving
    data from the table
  • The Oracle RDBMS version
  • The current optimizer mode
  • SQL statement hints
  • All available object statistics (generated via the
    ANALYZE command)
  • The physical table location (distributed SQL)
  • INIT.ORA settings (parallel query, async I/O, etc.)

Oracle gives you a choice of two optimizing alternatives: the
predictable rule-based optimizer and the more intelligent cost-based optimizer.


Understanding the Rule-Based Optimizer


The rule-based optimizer (RBO) uses a predefined set of
precedence rules to figure out which path it will use to access the database.
The RDBMS kernel defaults to the rule-based optimizer under a number of
conditions, including:



  • OPTIMIZER_MODE = RULE is specified in your INIT.ORA
    file
  • OPTIMIZER_MODE = CHOOSE is specified in your INIT.ORA
    file, and no statistics exist for
    any table involved in the statement
  • An ALTER SESSION SET OPTIMIZER_MODE = RULE command has
    been issued
  • An ALTER SESSION SET OPTIMIZER_MODE = CHOOSE
    command has been issued, and no statistics exist for
    any
    table involved in the statement
  • The rule hint (e.g., SELECT /*+ RULE */. . .) has been
    used in the statement
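For illustration, the optimizer mode can be switched at session level or per statement; the EMP query below is only a placeholder:

ALTER SESSION SET OPTIMIZER_MODE = RULE;

SELECT /*+ RULE */ empno, ename
FROM emp
WHERE empno = 7369;

ALTER SESSION SET OPTIMIZER_MODE = CHOOSE;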

Using the RBO, the optimizer chooses an execution plan based on the access paths available and the ranks of these access paths. Oracle's ranking of the access paths is heuristic. If there is more than one way to execute a SQL statement, then the RBO always uses the operation with the lower rank. Usually, operations of lower rank execute faster than those associated with constructs of higher rank.

Rule-based optimizer condition rankings

RBO Path 1: Single Row by Rowid
RBO Path 2: Single Row by Cluster Join
RBO Path 3: Single Row by Hash Cluster Key with Unique or Primary Key
RBO Path 4: Single Row by Unique or Primary Key
RBO Path 5: Clustered Join
RBO Path 6: Hash Cluster Key
RBO Path 7: Indexed Cluster Key
RBO Path 8: Composite Index
RBO Path 9: Single-Column Indexes
RBO Path 10: Bounded Range Search on Indexed Columns
RBO Path 11: Unbounded Range Search on Indexed Columns
RBO Path 12: Sort Merge Join
RBO Path 13: MAX or MIN of Indexed Column
RBO Path 14: ORDER BY on Indexed Column
RBO Path 15: Full Table Scan

http://www.oracle.com.cn/other/9ionlinedoc/server.920/a96533/rbo.htm


Understanding the Cost-Based Optimizer


The CBO determines which execution plan is most efficient by considering available access paths and by factoring in information based on statistics for the schema objects (tables or indexes) accessed by the SQL statement. The CBO also considers hints, which are optimization suggestions placed in a comment in the statement.

The CBO performs the following steps:

  • The optimizer generates a set of potential plans for the SQL statement based on available access paths and hints.
  • The optimizer estimates the cost of each plan based on statistics in the data dictionary for the data distribution and storage characteristics of the tables, indexes, and partitions accessed by the statement. The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan; it covers access paths and join orders and is based on the estimated computer resources, which include I/O, CPU, and memory. Serial plans with higher costs take more time to execute than those with smaller costs. When using a parallel plan, however, resource use is not directly related to elapsed time.
  • The optimizer compares the costs of the plans and chooses the one with the lowest cost.

CBO Statistics in the Data Dictionary

The statistics used by the CBO are stored in the data dictionary. You can collect exact or estimated statistics about physical storage characteristics and data distribution in these schema objects by using the DBMS_STATS package or the ANALYZE statement.
Oracle strongly recommends that you use the DBMS_STATS package rather than ANALYZE to collect optimizer statistics. That package lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine tune your statistics collection in other ways. Further, the cost-based optimizer will eventually use only statistics that have been collected by DBMS_STATS. However, you must use the ANALYZE statement rather than DBMS_STATS for statistics collection not related to the cost-based optimizer, such as:
  • To use the VALIDATE or LIST CHAINED ROWS clauses
  • To collect information on freelist blocks
To maintain the effectiveness of the CBO, you must have statistics that are representative of the data. For table columns that contain values with large variations in number of duplicates, called skewed data, you should collect histograms.
The resulting statistics provide the CBO with information about data uniqueness and distribution. Using this information, the CBO is able to compute plan costs with a high degree of accuracy. This enables the CBO to choose the best execution plan based on the least cost.
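As an illustration of the DBMS_STATS approach, the sketch below gathers table, index and column statistics, letting Oracle decide where histograms are useful; the SCOTT.EMP names are only examples:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);  -- also gather statistics on the table's indexes
END;
/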

How the CBO Optimizes SQL Statements for Fast Response


The CBO can optimize a SQL statement for fast response when the parameter OPTIMIZER_MODE is set to FIRST_ROWS_n, where n is 1, 10, 100, or 1000, or FIRST_ROWS. A hint FIRST_ROWS(n), where n is any positive integer, or FIRST_ROWS can be used to optimize an individual SQL statement for fast response.
Fast-response optimization is suitable for online users, such as those using Oracle Forms or Web access. Typically, online users are interested in seeing the first few rows and seldom look at the entire query result, especially when the result size is large. For such users, it makes sense to optimize the query to produce the first few rows as quickly as possible, even if the time to produce the entire query result is not minimized.
With fast-response optimization, the CBO generates a plan with the lowest cost to produce the first row or the first few rows. The CBO employs two different fast-response optimizations, referred to here as the old and new methods. The old method is used with the FIRST_ROWS hint or parameter value. With the old method, the CBO uses a mixture of costs and rules to produce a plan. It is retained for backward compatibility reasons.
The new method is totally based on costs, and it is sensitive to the value of n. With small values of n, the CBO tends to generate plans that consist of nested loop joins with index lookups. With large values of n, the CBO tends to generate plans that consist of hash joins and full table scans.
The value of n should be chosen based on the online user requirement and depends specifically on how the result is displayed to the user. Generally, Oracle Forms users see the result one row at a time and they are typically interested in seeing the first few screens. Other online users see the result one group of rows at a time.
With the fast-response method, the CBO explores different plans and computes the cost to produce the first n rows for each. It picks the plan that produces the first n rows at lowest cost. Remember that with fast-response optimization, a plan that produces the first n rows at lowest cost might not be the optimal plan to produce the entire result. If the requirement is to obtain the entire result of a query, then fast-response optimization should not be used. Instead use the ALL_ROWS parameter value or hint.
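For example, fast-response optimization can be requested at session level or per statement (the query itself is illustrative):

ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10;

SELECT /*+ FIRST_ROWS(10) */ empno, ename
FROM emp
ORDER BY ename;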

SQL Tuning Tips

    TIP 1: SQL cannot be shared within Oracle unless it is absolutely identical. Statements must match exactly in case, white space and underlying schema objects to be shared within Oracle's memory. Oracle avoids the parsing step for each subsequent use of an identical statement.

    sql> SELECT NAME FROM S_CUSTOMER WHERE ID = 212;   -- statement to match
    sql> SELECT NAME FROM s_customer WHERE ID = 212;   -- different: lower case
    sql> SELECT NAME FROM S_CUSTOMER WHERE ID=212;     -- different: white space
    sql> SELECT NAME
         FROM S_CUSTOMER
         WHERE ID=212;                                 -- different: white space (new lines)

    • Use SQL standards within an application. Rules like the following are easy to implement and will allow more sharing within Oracle's memory.

        - Using a single case for all SQL verbs
        - Beginning all SQL verbs on a new line
        - Right or left aligning verbs within the initial SQL verb
        - Separating all words with a single space


    • Use bind variables. The values of bind variables do not need to be the same for two statements to be considered identical. Bind variables are not substituted until a statement has been successfully parsed.

    Sharable SQL:
        SELECT * FROM emp WHERE emp_no = :B1;   -- bind value: 123
        SELECT * FROM emp WHERE emp_no = :B1;   -- bind value: 987
    Non-sharable SQL:
        SELECT * FROM emp WHERE emp_no = 123;
        SELECT * FROM emp WHERE emp_no = 987;


    • Use a standard approach to table aliases. If two identical SQL statements vary because an identical table has two different aliases, then the SQL is different and will not be shared.

    • Use table aliases and prefix all column names by their aliases when more than one table is involved in a query. This reduces parse time AND prevents future syntax errors if someone adds a column to one of the tables with the same name as a column in another table. (ORA-00918: COLUMN AMBIGUOUSLY DEFINED)
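    • One way to verify that statements are actually being shared (assuming SELECT privilege on V$SQL) is to look for multiple copies of what should be a single statement:

        SELECT sql_text, executions
        FROM v$sql
        WHERE UPPER(sql_text) LIKE 'SELECT NAME FROM S_CUSTOMER%';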

    TIP 2: Beware of WHERE clauses which prevent index use. Even if there is an index on a column, Oracle will ignore it when the column is wrapped in a function or expression, as in the 'Do Not Use' examples below. All of these WHERE clauses can be rewritten to use an index while returning the same values. In other words, don't perform operations on indexed columns in the WHERE clause.

    Do Not Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE SUBSTR(account_name,1,7) = 'CAPITAL';
    Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE account_name LIKE 'CAPITAL%';

    Do Not Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE account_name = NVL(:acc_name, account_name);
    Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE account_name LIKE NVL(:acc_name, '%');

    Do Not Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE TRUNC(trans_date) = TRUNC(SYSDATE);
    Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE trans_date BETWEEN TRUNC(SYSDATE) AND TRUNC(SYSDATE) + .99999;

    Do Not Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE account_name || account_type = 'AMEXA';
    Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE account_name = 'AMEX'
        AND account_type = 'A';

    Do Not Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE amount + 3000 < 5000;
    Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE amount < 2000;

    Do Not Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE amount != 0;
    Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE amount > 0;

    Do Not Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE amount NOT = 0;
    Use:
        SELECT account_name, trans_date, amount
        FROM transaction
        WHERE amount > 0;

    TIP 3: Don't forget to tune views. Views are SELECT statements and can be tuned just like any other SELECT statement. All tuning applicable to any SQL statement is equally applicable to views.

    TIP 4: Avoid including a HAVING clause in SELECT statements. The HAVING clause filters selected rows only after all rows have been fetched. Using a WHERE clause helps reduce overheads in sorting, summing, etc. HAVING clauses should only be used when columns with summary operations applied to them are restricted by the clause.

    Do Not Use:
        SELECT region, AVG(loc_size)
        FROM location
        GROUP BY region
        HAVING region != 'SYDNEY'
        AND region != 'PERTH';
    Use:
        SELECT region, AVG(loc_size)
        FROM location
        WHERE region != 'SYDNEY'
        AND region != 'PERTH'
        GROUP BY region;

    TIP 5: Minimize the number of table lookups (subquery blocks) in queries, particularly if your statements include subquery SELECTs or multicolumn UPDATEs.

    Separate Subqueries:
        SELECT emp_name
        FROM emp
        WHERE emp_cat = (SELECT MAX (category)
                         FROM emp_categories)
        AND emp_range = (SELECT MAX (sal_range)
                         FROM emp_categories)
        AND emp_dept = 0020;

    Combined Subqueries:
        SELECT emp_name
        FROM emp
        WHERE (emp_cat, emp_range) = (SELECT MAX (category), MAX (sal_range)
                                      FROM emp_categories)
        AND emp_dept = 0020;

    TIP 6: Consider the alternatives EXISTS, IN and table joins when doing multiple table joins. None of these are consistently faster; it depends on your data. If there is a poor performer here, it's likely the IN clause.

    (Note, this query returns the employee names from each department in department category 'A'.)

    EXISTS:
        SELECT emp_name
        FROM emp E
        WHERE EXISTS (SELECT 'X'
                      FROM dept
                      WHERE dept_no = E.dept_no
                      AND dept_cat = 'A');

    IN:
        SELECT emp_name
        FROM emp E
        WHERE dept_no IN (SELECT dept_no
                          FROM dept
                          WHERE dept_no = E.dept_no
                          AND dept_cat = 'A');

    Join:
        SELECT emp_name
        FROM dept D, emp E
        WHERE E.dept_no = D.dept_no
        AND D.dept_cat = 'A';

    TIP 7: Avoid joins that require the DISTINCT qualifier on the SELECT list in queries which are used to determine information at the owner end of a one-to-many relationship. The DISTINCT operator causes Oracle to fetch all rows satisfying the table join and then sort and filter out duplicate values. EXISTS is a faster alternative, because the Oracle optimizer realizes when the subquery has been satisfied once, there is no need to proceed further and the next matching row can be fetched.

    (Note: This query returns all department numbers and names which have at least one employee.)

    Do Not Use:
        SELECT DISTINCT dept_no, dept_name
        FROM dept D, emp E
        WHERE D.dept_no = E.dept_no;
    Use:
        SELECT dept_no, dept_name
        FROM dept D
        WHERE EXISTS (SELECT 'X'
                      FROM emp E
                      WHERE E.dept_no = D.dept_no);

    TIP 8: Consider whether a UNION ALL will suffice in place of a UNION. The UNION clause forces all rows returned by each portion of the UNION to be sorted and merged and duplicates to be filtered before the first row is returned. A UNION ALL simply returns all rows including duplicates and does not have to perform any sort, merge or filter. If your tables are mutually exclusive (include no duplicate records), or you don't care if duplicates are returned, the UNION ALL is much more efficient.

    UNION:
        SELECT acct_num, balance_amt
        FROM debit_transactions
        WHERE tran_date = '31-DEC-95'
        UNION
        SELECT acct_num, balance_amt
        FROM credit_transactions
        WHERE tran_date = '31-DEC-95';

    UNION ALL:
        SELECT acct_num, balance_amt
        FROM debit_transactions
        WHERE tran_date = '31-DEC-95'
        UNION ALL
        SELECT acct_num, balance_amt
        FROM credit_transactions
        WHERE tran_date = '31-DEC-95';

    TIP 9: Consider using DECODE to avoid having to scan the same rows repetitively or join the same table repetitively. Note, DECODE is not necessarily faster as it depends on your data and the complexity of the resulting query. Also, using DECODE requires you to change your code when new values are allowed in the field.

    Separate queries:
        SELECT COUNT(*)
        FROM emp
        WHERE status = 'Y'
        AND emp_name LIKE 'SMITH%';

        SELECT COUNT(*)
        FROM emp
        WHERE status = 'N'
        AND emp_name LIKE 'SMITH%';

    Single query with DECODE:
        SELECT COUNT(DECODE(status, 'Y', 'X', NULL)) Y_count,
               COUNT(DECODE(status, 'N', 'X', NULL)) N_count
        FROM emp
        WHERE emp_name LIKE 'SMITH%';

    TIP 10: Oracle automatically performs simple column type conversions (or casting) when it compares columns of different types. Depending on the type of conversion, indexes may not be used. Make sure you declare your program variables as the same type as your Oracle columns, if the type is supported in the programming language you are using.

    Indexed NUMBER column (emp_no):
        Your query:                 SELECT ... FROM emp WHERE emp_no = '123';
        After implicit conversion:  SELECT ... FROM emp WHERE emp_no = TO_NUMBER('123');
        Index used?                 YES

    Indexed VARCHAR2 column (emp_type):
        Your query:                 SELECT ... FROM emp WHERE emp_type = 123;
        After implicit conversion:  SELECT ... FROM emp WHERE TO_NUMBER(emp_type) = 123;
        Index used?                 NO!
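    Where the program variable or literal type cannot be changed, one hedged workaround is to convert the literal explicitly so the indexed column itself is left untouched (emp_type is the hypothetical indexed VARCHAR2 column from the example above):

        SELECT empno, ename
        FROM emp
        WHERE emp_type = TO_CHAR(123);   -- index on emp_type remains usable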

Predicate Selectivity

Purpose
~~~~~~~

This article explains how the CBO determines the selectivity for various
predicates.

Audience
~~~~~~~~
This article is aimed at Application Developers, DBAs and support analysts
interested in understanding how the CBO determines how selective a
particular access path is.

Selectivity
~~~~~~~~~~~
Selectivity is a measure of the proportion of a row source retrieved by
application of a particular predicate or combination of predicates.

Within the Oracle kernel it is expressed as a value between 0 and 1.
The closer the value is to 0 the more selective the predicate is.
Selectivity is only used by the CBO.

Basic Selectivity formula:
~~~~~~~~~~~~~~~~~~~~~~~~~~

                  Number of records satisfying a condition
    Selectivity = -----------------------------------------
                  Total number of records
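For example, if 100 of 10,000 rows satisfy a predicate, its selectivity is
100/10000 = 0.01.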

In the optimizer, selectivity is used to compare the usefulness of various
predicates in combination with base object costs.

Knowing the proportion of the total data set that a column predicate defines
is very helpful in defining actual access costs.

By default, column selectivity is based on the high and low values and the
number of values in the column with an assumption of even distribution of
data between these two points.

Histogram data can give better selectivity estimates for unevenly distributed
column data. There is more discussion regarding histograms later.

Selectivity is also used to define the cardinality of a particular row source
once predicates have been applied. Cardinality is the expected number of rows
that will be retrieved from a row source. Cardinality is useful in determining
nested loop join and sort costs. Application of selectivity to the original
cardinality of the row source produces the expected (computed) cardinality
for the row source.


Glossary of Terms:
~~~~~~~~~~~~~~~~~~

NDV          Number of Distinct Values
Cardinality  Number of rows
Selectivity  Proportion of a data set returned by a particular predicate (or
             group of predicates)

In the following illustrations there are two tables (T1 and T2) with columns (c1, c2).

Selectivities:
~~~~~~~~~~~~~~
Without histograms
~~~~~~~~~~~~~~~~~~
c1 = '4076'          1/NDV
c1 > '4076'          (High - Value) / (High - Low)
c1 >= '4076'         (High - Value) / (High - Low) + 1/NDV
c1 like '4076'       1/NDV
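As a worked example (illustrative figures, assuming an even distribution): for
a column with Low = 1, High = 101 and NDV = 100, the predicate c1 > 81 is
estimated at (101 - 81) / (101 - 1) = 0.2, while c1 = 81 is estimated at
1/100 = 0.01.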

Join selectivity
~~~~~~~~~~~~~~~~

The selectivity of a join is defined as the selectivity of the most selective
join column adjusted by the proportion of not-null values in each join column:

Sel = 1/max[NDV(t1.c1), NDV(t2.c2)] *
      ((Card t1 - # t1.c1 NULLs) / Card t1) *
      ((Card t2 - # t2.c2 NULLs) / Card t2)
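For example (illustrative figures), with NDV(t1.c1) = 10, NDV(t2.c2) = 50 and
no NULLs in either join column, Sel = 1/50 * 1 * 1 = 0.02.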

Bind Variable selectivity
~~~~~~~~~~~~~~~~~~~~~~~~~

Bind variables present a special case because the optimizer has no idea what
the bind variable value is prior to query optimization. This does not present
a problem with equality predicates since a uniform distribution of data is
assumed and the selectivity is taken as 1/NDV for the column. However for
range predicates it presents a major issue because the optimizer does not
know where the range starts or stops. Because of this the optimizer has to
make some assumptions as follows:

c1 = :bind1          1/NDV
c1 > :bind1          Default of 5%
c1 >= :bind1         Default of 5%
c1 like :bind1       Default of 25%

For more information on bind variables see Note 70075.1

Selectivity With Histograms
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Histograms provide additional information about column selectivity for
columns whose distribution is non-uniform. Histograms store information about
column data value ranges; each range is stored in a single row and is often
called a 'bucket'. There are two different methods for storing histograms in
Oracle: if there is a small number of distinct column values (i.e., fewer than
the number of buckets), the column value and the count of that value are
stored; if not, a series of endpoints is stored to enable more accurate
selectivity to be determined.

The first method allows exact figures to be used. With inexact histograms,
however, the terms popular value and non-popular value are introduced and are
used to help determine selectivity. A popular value is a value that spans
multiple endpoints, whereas a non-popular value does not.
See Note 72539.1 for more information on histograms.
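As an illustration (using the T1 and c1 names from above), a histogram can be
requested explicitly through DBMS_STATS; the bucket count of 75 here is
arbitrary:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'T1',
    method_opt => 'FOR COLUMNS c1 SIZE 75');  -- histogram with up to 75 buckets on c1
END;
/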
Exact histograms
~~~~~~~~~~~~~~~~
c1 = '4076'          count of value '4076' / Total Number of Rows
c1 > '4076'          count of values > '4076' / Total Number of Rows

Inexact histograms
~~~~~~~~~~~~~~~~~~
col = popular value        # popular buckets / # buckets
col = non-popular value    density
col > value                # buckets > value / # buckets


Rules for combining selectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Let P1 and P2 be two distinct predicates of query Q.

P1 AND P2:   S(P1 & P2) = S(P1) * S(P2)
P1 OR P2:    S(P1 | P2) = S(P1) + S(P2) - S(P1) * S(P2)
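For example, if S(P1) = 0.1 and S(P2) = 0.2, then S(P1 & P2) = 0.1 * 0.2 = 0.02
and S(P1 | P2) = 0.1 + 0.2 - 0.02 = 0.28.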

Index Selectivity for concatenated indexes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Starting with 10.2, when a concatenated index, with all its columns having
equality predicates, is used as an access path, the optimizer uses 1/NDK as
the selectivity (where NDK is the number of distinct keys in the index).

On 9.2.0.7 and 9.2.0.8 this may be enabled with event 38060 level 1.
On 10.1.0.4 and 10.1.0.5 this may be enabled with event 38059 level 1.
On 10.2 adjustments will be made to the selectivity to account for nulls
in the index keys. This also occurs on 10.1.0.5 (with event 38059.)


Join cardinality
~~~~~~~~~~~~~~~~

Card(Pj) = Card(T1) * Card(T2) * Sel(Pj)
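For example (illustrative figures), if Card(T1) = 1000, Card(T2) = 500 and
Sel(Pj) = 0.02, the computed join cardinality is 1000 * 500 * 0.02 = 10,000 rows.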
 

Bind variables in queries

Purpose
~~~~~~~
This article is intended to provide more information on the usage of bind
variables in queries.

Scope & Application
~~~~~~~~~~~~~~~~~~~
This article is aimed at application designers/users who may not understand
some of the implications for them and the database in the usage of bind
variables.

Bind variables
~~~~~~~~~~~~~~

Bind variables are placeholders for query input values; they point to a memory
location where the data value(s) will be placed.

Note that the presence of bind variables has no effect on queries that are
optimized using the RBO. They only affect CBO query optimization, because the
CBO attempts to use column value information to determine the optimal access
path for the query. When no values are supplied, the CBO may make a
sub-optimal plan choice.

Advantages of bind variables:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

o When a bind variable, as opposed to a hardcoded value, is placed in a query,
the query text does not have to change each time the query is run. This means
that the statement does not need to be reparsed, can be shared between
sessions, and you do not need to maintain a copy of the statement for each
value used in the query. The effect of this is to reduce the amount of space
used in the shared pool to store almost identical copies of SQL statements.

NB: sharing also depends on other factors, e.g.:
o identical objects and object owners must be referenced
o bind variables must have the same datatype
o etc.

Disadvantages of bind variables:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

o When a SQL statement is optimized, the optimizer is unable to use the current
bind value. If it did, the plan chosen for that value might be excessively
poor for other values, and the plan chosen would depend on which value was
supplied first. Because of this, the optimizer must either use the average
selectivity of the column (the density) or use defaults. This may result in
the generation of a sub-optimal plan.

o The CBO is unable to determine accurate selectivities for range predicates
containing bind variables. The CBO uses column value data to adjust
selectivities; if it has no data values to work with (as is the case when
bind variables are used), assumptions have to be made. For queries with range
predicates using bind variables, there is no way of calculating the
selectivity, so a hardcoded default of 5% is used. This is true irrespective
of histograms, because the CBO does not know the value of the bind variable.

Selectivity for bind variables with LIKE predicates defaults to 25%.

Range Predicate Example:
~~~~~~~~~~~~~~~~~~~~~~~~
SELECT ename FROM emp WHERE empno > 9999;
SELECT ename FROM emp WHERE empno > :bind1;

Assuming the table has been analyzed, CBO knows the HIGH and LOW values for
empno and that the values are evenly distributed between these points.
For the first statement, CBO can determine the selectivity for the
where clause 'where empno >9999' - it uses the assumption that values
are evenly distributed to enable it to estimate the number of values between
the supplied value and the HIGH value.

For the second statement, it does not know what the value of :bind1 is,
so it is unable to use the same assumption and uses the default selectivity
of 5%.
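To make this concrete (illustrative figures): if emp has 10,000 rows with empno
evenly distributed between 1 and 10000, the literal predicate empno > 9999 is
estimated to return roughly (10000 - 9999) / (10000 - 1) of the rows, about
1 row, whereas the bind version is costed at the 5% default, about 500 rows,
whatever the actual bind value turns out to be.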

It is possible to test the effect of using bind variables as opposed to literals
by setting up a variable in SQL*Plus and assigning a value to it.
Setup details for a numeric bind variable called bindvar:
Setup details for a numeric bind variable called bindvar:

variable bindvar number;
begin
:bindvar:=10;
end;
/
SELECT * FROM emp WHERE deptno = :bindvar;

Bind variable selectivities:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
See Note 68992.1

Advice
~~~~~~

o Use bind variables for queries which are executed many times
e.g. OLTP environments
o Use literals where queries are not repeatedly executed and where the query
is sensitive to column selectivities.
o If bind variables are being used then use hints to force the desired access
path.
o Balance the use of bind variables and shareable code with the need to
produce accurate plans.