Execution Plan
About
An Execution Plan in Oracle describes the sequence of operations the Oracle optimizer will perform to execute a SQL statement. It includes steps such as table access, index usage, joins, filters, and sort operations.
The plan reflects:
The order in which Oracle accesses objects
The methods used (e.g., full scan, index scan)
The cost associated with each operation
The estimated number of rows processed at each step
It is generated by the Cost-Based Optimizer (CBO) based on object statistics, available indexes, and query structure.
Why Execution Plans Matter
Understanding the execution plan is critical for:
Identifying performance issues
Determining whether indexes are used
Detecting unnecessary full table scans or sorts
Choosing better join orders or access paths
Verifying whether query rewrites (e.g., subquery to join) are effective
How to Generate Execution Plans
a. EXPLAIN PLAN
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
b. AUTOTRACE (in SQL*Plus or SQL Developer)
SET AUTOTRACE ON
SELECT * FROM employees WHERE department_id = 10;
c. DBMS_XPLAN.DISPLAY_CURSOR (for Real Execution)
To view the actual plan after running a query:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(null, null, 'ALLSTATS LAST'));
Use this to get real runtime statistics, not just estimates. Note that actual row counts appear only if the query was run with the GATHER_PLAN_STATISTICS hint or with STATISTICS_LEVEL set to ALL.
Understanding Execution Plan Output
1. Hierarchical Step IDs (ID and PARENT_ID)
Each row in the execution plan represents an operation, assigned an ID.
The plan is structured in a hierarchical tree, where parent-child relationships define the sequence of operations.
Leaf nodes perform actual data access (e.g., table scan), while higher nodes represent combinations (e.g., joins, sorts).
The most deeply indented (right-most) leaf operations are generally executed first.
Example structure:
| Id | Operation | Object |
|----|------------------------|---------------|
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
| 2 | TABLE ACCESS FULL | EMPLOYEES |
Step 2 runs first (full scan), then step 1 aggregates, then step 0 returns the result.
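The parent-child walk described above can be sketched in Python (purely an illustration of the traversal, not Oracle code): children are executed before their parents, so a post-order walk over the ID / PARENT_ID pairs yields the execution order.

```python
# Illustrative sketch: reconstructing execution order from the
# ID / PARENT_ID columns of a plan table. Children run before their
# parents, so a post-order walk yields the order of execution.
from collections import defaultdict

def execution_order(steps):
    """steps: list of (id, parent_id, operation); parent_id is None for the root."""
    children = defaultdict(list)
    root = None
    for step_id, parent_id, _ in steps:
        if parent_id is None:
            root = step_id
        else:
            children[parent_id].append(step_id)
    order = []
    def walk(node):
        for child in children[node]:
            walk(child)
        order.append(node)  # visit a node only after all its children
    walk(root)
    return order

plan = [
    (0, None, "SELECT STATEMENT"),
    (1, 0,    "SORT AGGREGATE"),
    (2, 1,    "TABLE ACCESS FULL EMPLOYEES"),
]
print(execution_order(plan))  # [2, 1, 0]
```

This matches the reading above: the full scan (step 2) runs first, then the aggregate (step 1), then the statement root (step 0).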
2. Operation
The OPERATION column describes the type of action Oracle performs at that step. This could be data access, a join, filtering, aggregation, or transformation.
Common operations include:
| Operation | Description |
|--------------------|--------------------------------------------------------------|
| TABLE ACCESS FULL | Full table scan |
| INDEX RANGE SCAN | Reads index entries within a range |
| INDEX UNIQUE SCAN | Reads a single index entry for a unique key |
| NESTED LOOPS | For each outer row, executes the inner plan (best for small sets) |
| HASH JOIN | Uses hashing to join two large row sets (requires memory) |
| MERGE JOIN | Joins two sorted inputs (typically sorted on join keys) |
| SORT ORDER BY | Sorts rows for an ORDER BY clause |
| FILTER | Applies WHERE clause filters or correlated subquery logic |
| VIEW | Represents a logical view being materialized or queried |
3. Object (TABLE / INDEX NAME)
The OBJECT_NAME column shows the name of the table, index, or view involved in the operation.
Important when:
Verifying that the correct index is being used.
Ensuring that Oracle is not accessing unexpected objects (e.g., a different table due to synonyms or views).
Debugging plans involving materialized views, temporary tables, or inline views.
4. Access Predicates vs Filter Predicates
Execution plans sometimes show two types of filtering conditions:
Access Predicates: Conditions that limit the rows retrieved from a table or index.
These directly influence index usage.
Example: WHERE emp_id = 101 when an index exists on emp_id.
Filter Predicates: Conditions applied after data has been fetched.
These may not prevent rows from being read but filter them afterward.
Example: WHERE salary > 10000 after a full scan.
Access predicates are more efficient because they reduce data access at the source, while filter predicates are post-processing.
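The difference can be sketched in Python (purely illustrative; the sample data and the emp_id index are made up): an access predicate goes straight to matching entries via the index, while a filter predicate reads every row first and discards non-matches afterward.

```python
# Illustrative sketch (not Oracle internals): the practical difference
# between an access predicate and a filter predicate.

employees = [
    {"emp_id": 100, "salary": 9000},
    {"emp_id": 101, "salary": 12000},
    {"emp_id": 102, "salary": 15000},
]

# Hypothetical index on emp_id: maps key -> row, like an INDEX UNIQUE SCAN.
emp_id_index = {row["emp_id"]: row for row in employees}

# Access predicate: WHERE emp_id = 101 goes straight to one entry.
by_access = emp_id_index[101]  # one lookup, no scan of other rows

# Filter predicate: WHERE salary > 10000 reads every row, then discards.
by_filter = [r for r in employees if r["salary"] > 10000]

print(by_access["salary"])  # 12000
print(len(by_filter))       # 2
```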
5. Estimated Rows (Rows) and Selectivity
The Rows column shows the estimated number of rows Oracle expects to process at each step. This is based on table statistics and column histograms.
Low selectivity (e.g., a column with only a few distinct values) will result in a high row estimate.
Large errors in row estimates may indicate stale statistics, data skew, or missing histograms, leading to poor plan choices.
Understanding the expected row counts helps in:
Validating whether joins will be efficient
Judging if filters are being pushed properly
Ensuring the right join method (nested loop vs hash join) is selected
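As a rough sketch of how such an estimate arises, here is the classic equality-predicate formula when no histogram is present (a simplification of what the CBO actually does): selectivity is 1 divided by the number of distinct values (NDV).

```python
# Illustrative sketch of the basic optimizer estimate for an equality
# predicate without histograms:
#   selectivity     = 1 / NDV (number of distinct values)
#   estimated rows  = total_rows * selectivity

def estimated_rows(total_rows, num_distinct):
    selectivity = 1 / num_distinct
    return round(total_rows * selectivity)

# A 1,000,000-row table where department_id has 50 distinct values:
print(estimated_rows(1_000_000, 50))  # 20000
```

This also shows why low selectivity (few distinct values) produces a high row estimate, and why data skew breaks it: if one department holds half the table, the uniform 1/NDV assumption is badly wrong unless a histogram corrects it.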
6. Cost
The Cost column is a relative number representing Oracle’s estimate of how expensive a step is.
The optimizer calculates cost based on I/O, CPU usage, and memory consumption, not elapsed time.
Lower cost usually implies a better plan, but it's not an absolute measure.
Cost is useful for comparing two plans, not for predicting runtime.
High-cost operations are typically full scans, sorts, or joins on large tables.
7. Bytes
Represents the estimated volume of data processed at that step (in bytes).
Calculated as: Estimated Rows * Row Size.
Helps determine memory usage and potential spill to disk for operations like sorts or hash joins.
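A quick illustration of the arithmetic (the row count and average row size are made-up numbers, not from any real table):

```python
# Sketch: the Bytes column is just estimated rows times average row size.
est_rows = 20_000      # from the Rows column
avg_row_size = 50      # bytes per row, from table statistics (assumed)
est_bytes = est_rows * avg_row_size
print(est_bytes)       # 1000000
```

Roughly 1 MB flowing through this step; if that exceeds the memory available to a sort or hash join, the operation may spill to disk.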
8. Cardinality Propagation
Cardinality is the row count estimation passed between steps.
For example:
If a filter reduces a table of 1 million rows to 10,000 rows, that estimate is passed to the next operation (e.g., join).
Join methods are selected based on combined cardinalities of input datasets.
Incorrect cardinality estimates can:
Lead to bad join orders
Force Oracle to choose a nested loop when a hash join is better
Cause excessive memory usage or sorting
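A minimal sketch of this propagation, using the textbook join-cardinality formula (a simplification; the real optimizer also accounts for histograms and skew, and all numbers here are invented):

```python
# Sketch of cardinality propagation: a filter's estimate feeds the join
# estimate above it. Textbook equi-join estimate:
#   join_rows = rows_a * rows_b / max(NDV_a(join_col), NDV_b(join_col))

def filter_card(total_rows, selectivity):
    return total_rows * selectivity

def join_card(rows_a, rows_b, ndv_a, ndv_b):
    return rows_a * rows_b / max(ndv_a, ndv_b)

emp_rows = filter_card(1_000_000, 0.01)  # filter keeps 1% -> 10,000 rows
dept_rows = 100                          # small dimension table
# Join on department_id, with 100 distinct values on both sides:
print(join_card(emp_rows, dept_rows, 100, 100))  # 10000.0
```

If the filter estimate were off by 100x, the join estimate above it would be off by the same factor, which is exactly how a single stale statistic can flip the chosen join method.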
9. Join Methods
Three main join methods are commonly used:
| Join Method | Best Suited For |
|--------------|--------------------------------------------------------------|
| Nested Loops | Small outer input, fast access to inner rows via index |
| Hash Join | Large unsorted data sets; memory-intensive, parallel-friendly |
| Merge Join | Both inputs sorted on the join keys |
Choosing the wrong join method due to bad row estimates often leads to performance bottlenecks.
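The two most common methods can be sketched in Python (toy implementations for illustration only, not Oracle internals; the tables and join column are made up):

```python
# Illustrative sketches of the two most common join algorithms,
# joining on department_id.

def nested_loops_join(outer, inner):
    # For each outer row, probe the inner set: cheap when the outer
    # input is small and the inner lookup is indexed.
    return [(o, i) for o in outer for i in inner
            if o["department_id"] == i["department_id"]]

def hash_join(build, probe):
    # Build a hash table on the smaller input, then probe it once per
    # row of the larger input: good for large, unsorted inputs.
    table = {}
    for row in build:
        table.setdefault(row["department_id"], []).append(row)
    return [(b, p) for p in probe
            for b in table.get(p["department_id"], [])]

depts = [{"department_id": 10}, {"department_id": 20}]
emps = [{"department_id": 10, "name": "A"},
        {"department_id": 10, "name": "B"},
        {"department_id": 30, "name": "C"}]

print(len(nested_loops_join(depts, emps)))  # 2
print(len(hash_join(depts, emps)))          # 2
```

Both produce the same pairs; the cost difference is structural: nested loops does one inner probe per outer row, while the hash join pays once to build the table and then probes in constant time, which is why bad row estimates can make the optimizer pick the wrong one.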