Monday, July 19, 2010

How to Find Out How Much Space an Index is Using

PURPOSE
This article provides select statements to show the actual usage of blocks
within an index. This gives an idea of how 'full' an index is and allows
a DBA to adjust next extent sizes, etc.

SCOPE & APPLICATION
For DBAs who need to understand how much space is currently being used
within an index.

In order to gather the necessary index statistics, you need to 'validate
the structure' of the index. This is achieved using the following command:

analyze index <index_name> validate structure;

NOTE: In Oracle V6 the validate index command performed the same function.

Analyze index validate structure checks the structure of the index and
populates a table called index_stats. The command (in this form) does not
gather statistics for use by the Cost Based Optimizer. The index_stats table
can only hold 1 row at a time. So, if you are planning to store index data for
multiple indexes or for historical comparison purposes, you will need to
insert the data into a more permanent table. Statistics are also lost at the
end of each session.
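
As a minimal sketch of persisting the data (my_index_stats is a placeholder
table name; the INDEX_STATS columns used are standard), the single row can be
copied into a permanent table immediately after each analyze:

create table my_index_stats as
select name, height, blocks, lf_blks, br_blks,
       btree_space, used_space, pct_used, sysdate snap_date
from index_stats where 1=0;

analyze index draw1 validate structure;

insert into my_index_stats
select name, height, blocks, lf_blks, br_blks,
       btree_space, used_space, pct_used, sysdate
from index_stats;
commit;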

Once you have gathered the statistics, they can be retrieved as follows:

column name format a15
column blocks heading "ALLOCATED|BLOCKS"
column lf_blks heading "LEAF|BLOCKS"
column br_blks heading "BRANCH|BLOCKS"
column Empty heading "UNUSED|BLOCKS"
select name,
blocks,
lf_blks,
br_blks,
blocks-(lf_blks+br_blks) empty
from index_stats;

Example output
==============

SQL> analyze index draw1 validate structure;

Index analyzed.

SQL> select name, blocks, lf_blks, br_blks, blocks-(lf_blks+br_blks) empty
2 from index_stats;

ALLOCATED LEAF BRANCH UNUSED
NAME BLOCKS BLOCKS BLOCKS BLOCKS
--------------- ---------- ---------- ---------- ----------
DRAW1 433 426 6 1

It is also possible to gather statistics on the use of space within the
btree itself:

select name,
btree_space,
used_space,
pct_used
from index_stats;

Example output
==============

SQL> select name, btree_space, used_space, pct_used
2 from index_stats;

NAME BTREE_SPACE USED_SPACE PCT_USED
--------------- ----------- ---------- ----------
DRAW1 810624 236284 30


Using TKProf to compare actual and predicted row counts

Purpose


Explain how TKProf can be used to highlight differences in predicted and actual row counts to identify potential optimizer problems.
Scope and Application


Customers and support analysts interested in query tuning.
Using TKProf to compare actual and predicted row counts


SQL_TRACE combined with TKProf can provide some very useful information to assist with the tuning of queries. There are numerous other examples that detail the main sections of a TKProf report. Especially useful for tuning SQL are the actual row counts, as these can be compared against the predicted row counts. Expected row counts (cardinalities, e.g. Card=12345) can be found in explain plans for queries optimized using the Cost Based Optimizer (CBO). Actual row counts can be found in TKProf output or in the STAT lines from raw SQL_TRACE (or DBMS_SUPPORT.START_TRACE) output. Any anomalies can then be investigated. For example, if the actual number of rows returned by a particular explain plan step differs significantly from the CBO's cardinality estimates, then it is possible that this is a source of a bad plan choice. The cause of the incorrect statistics can be investigated and corrected.
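
As a minimal sketch of producing the trace and report (the trace file name and
the scott/tiger credentials are placeholders; the trace file is written to
USER_DUMP_DEST):

SQL> alter session set sql_trace = true;
SQL> -- run the statement(s) of interest, then:
SQL> alter session set sql_trace = false;

$ tkprof ora_1234.trc tkprof_report.txt explain=scott/tiger sys=no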
Consider the following statement:

SELECT ename FROM emp e, dept d WHERE e.deptno=d.deptno;
The execution plan from autotrace (containing cost and cardinality information) can be compared with the actual row counts displayed against the raw trace file or the tkprof explain plan:

Autotrace output:

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=14 Bytes=126)
1 0 HASH JOIN (Cost=5 Card=14 Bytes=126)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=98)
3 1 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=4 Bytes=8)

The autotrace output shows the expected cardinality figures (Card=...) for each step in the query. These values are calculated using the statistics gathered on the objects, with modifications according to the query predicates. The cardinalities can be compared with the actual figures as collected by SQL_TRACE:

Raw trace file:

STAT #1 id=1 cnt=14 pid=0 pos=1 obj=0 op='HASH JOIN '
STAT #1 id=2 cnt=14 pid=1 pos=1 obj=44913 op='TABLE ACCESS FULL EMP '
STAT #1 id=3 cnt=4 pid=1 pos=2 obj=42480 op='TABLE ACCESS FULL DEPT '
The cnt value in the relevant 'STAT' line for the query shows the row count for each step of the recorded plan (the number immediately after STAT # indicates the cursor that the statistics refer to). This plan is formatted by TKProf to give the following output:

Formatted tkprof explain plan:

Rows Row Source Operation
------- ---------------------------------------------------
14 HASH JOIN
14 TABLE ACCESS FULL EMP
4 TABLE ACCESS FULL DEPT

The cardinality and row figures for each step should be the same. If they are not, then this may indicate that there is a problem in the way the base statistics have been gathered (no data sampling, insufficient sample size, skewed data, etc.) or some deficiency in the cardinality computation method.

The Cost Based Optimizer has a few limitations to its model that affect queries in a small number of cases. These limitations are documented in the following article: Note:212809.1 Limitations of the Oracle Cost Based Optimizer

Related Documents


Note:212809.1 Limitations of the Oracle Cost Based Optimizer
Note:43214.1 Autotrace usage
Note:163563.1 Resolving Query Tuning Issues
Note:62160.1 Tracing Sessions in Oracle7/8
Note:39817.1 Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output
Note:199081.1 Overview Reference for SQL_TRACE, TKProf and Explain Plan

Data Guard Wait Events

This note describes the wait events that monitor the performance of the log
transport modes that were specified on the primary database with the ARCH, LGWR,
SYNC, and ASYNC attributes on the LOG_ARCHIVE_DEST_n initialization parameter.
The wait events and associated timing information are displayed by the
V$SYSTEM_EVENT view, as follows:


Wait events for standby destinations configured with the ARCH attribute:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

"ARCH wait on ATTACH"

This wait event monitors the amount of time spent by all archiver
processes to spawn an RFS connection.

"ARCH wait on SENDREQ"

This wait event monitors the amount of time spent by all archiver
processes to write the received redo to disk as well as open and close
the remote archived redo logs.

"ARCH wait on DETACH"

This wait event monitors the amount of time spent by all archiver
processes to delete an RFS connection.


Wait events for standby destinations configured with the LGWR SYNC attributes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

"LGWR wait on ATTACH"

This wait event monitors the amount of time spent by all log writer
processes to spawn an RFS connection.

"LGWR wait on SENDREQ"

This wait event monitors the amount of time spent by all log writer
processes to write the received redo to disk as well as open and close
the remote archived redo logs.

"LGWR wait on DETACH"

This wait event monitors the amount of time spent by all log writer
processes to delete an RFS connection.


Wait events for standby destinations configured with the LGWR ASYNC attributes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

"LNS wait on ATTACH"

This wait event monitors the amount of time spent by all network servers
to spawn an RFS connection.

"LNS wait on SENDREQ"

This wait event monitors the amount of time spent by all network servers
to write the received redo to disk as well as open and close the remote
archived redo logs.

"LNS wait on DETACH"

This wait event monitors the amount of time spent by all network servers
to delete an RFS connection.

"LGWR wait on full LNS buffer"

This wait event monitors the amount of time spent by the log writer
(LGWR) process waiting for the network server (LNS) to free up ASYNC
buffer space. If buffer space has not been freed in a reasonable amount
of time, the archiver process (ARCn) is allowed to transmit the redo log
data instead, so that availability of the primary database is not
compromised. This wait event is not relevant for destinations configured
with the LGWR SYNC=PARALLEL attributes.


Wait events for standby destinations configured with either the LGWR ASYNC or
LGWR SYNC=PARALLEL attributes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

"LGWR wait on LNS"

This wait event monitors the amount of time spent by the log writer (LGWR)
process waiting to receive messages on KSR channels from the network
server.

"LNS wait on LGWR"

This wait event monitors the amount of time spent by the network server
waiting to receive messages on KSR channels from the log writer (LGWR)
process.

"LGWR-LNS wait on channel"

This wait event monitors the amount of time spent by the log writer (LGWR)
process or the network server processes waiting to receive messages on
KSR channels.
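
As a minimal sketch, the events above can be inspected with a query such as
the following (exact event names can vary slightly by release):

select event, total_waits, time_waited
from v$system_event
where event like 'ARCH wait on%'
   or event like 'LGWR wait on%'
   or event like 'LNS wait on%'
   or event like 'LGWR-LNS wait%';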

Recommendations for Gathering Optimizer Statistics

In this Document
Goal
Solution
References



--------------------------------------------------------------------------------



Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.1.0.2 - Release: 10.1 to 10.1
Oracle Server - Standard Edition - Version: 10.1.0.2 to 10.1.0.2 [Release: 10.1 to 10.1]


Goal
This document outlines the recommended method to gather a standard set of optimizer statistics for use by the Cost Based Optimizer under Oracle 10g.


Solution
Quick Recreate Recommendation

To achieve a quick delete and recreate of the statistics on an individual table and its indexes (adding column statistics for any skewed columns), following the recommendations in this article, use:

exec dbms_stats.delete_table_stats(ownname=>'user_name',-
tabname=>'table_name',cascade_indexes=>true);

exec dbms_stats.gather_table_stats(ownname=>'user_name',-
tabname=>'table_name',-
estimate_percent => 100,-
cascade=>true,-
method_opt=>'for all columns size skewonly');

For an explanation of these recommendations, see below. For more usage examples, see the end of this article.
Note that, from 10gR2, statistics can be restored using:

Note:452011.1 Master Note: Restoring table statistics in 10G onwards

IMPORTANT: PLEASE NOTE:

•These recommendations apply to the majority of databases.
•The recommendations aim to generate statistics with as much statistical accuracy as possible. To this end, 100% sample sizes are suggested, since any reduction in sample size is always a concession to accuracy. It is acknowledged that such 100% samples are potentially time consuming, and consideration needs to be given to fitting the statistics gathering activities within the existing maintenance window.

•Gathering new optimizer statistics should maintain or improve existing execution plans, but it is possible that some queries' performance may degrade. Note that from 10gR1, previous copies of statistics are maintained by default for the last 30 days and can be restored in the case of problems. See:
Note:452011.1 Master Note: Restoring table statistics in 10G onwards
•Gathering new optimizer statistics may invalidate cursors in the shared pool, so it is prudent to restrict all gathering operations to periods of low activity in the database, such as the scheduled maintenance windows.
•For very large systems, the gathering of statistics can be a very time consuming and resource intensive activity. In this environment, sample sizes need to be carefully controlled to ensure that gathering completes within acceptable timescales and resource constraints, and within the maintenance window. For guidance on this topic see:

Note:44961.1 Statistics Gathering: Frequency and Strategy Guidelines

In these environments, it is also recommended to utilise change-based statistics gathering to avoid re-gathering information unnecessarily. Please see:

Note:237901.1 Gathering Schema or Database Statistics Automatically - Examples
Note:377152.1 Best Practices for automatic statistics collection on Oracle 10g
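
As a minimal sketch of change-based gathering (assuming table monitoring is
enabled and 'user_name' is a placeholder), the GATHER STALE option re-gathers
statistics only for objects that DBMS_STATS considers stale:

exec dbms_stats.gather_schema_stats(ownname=>'user_name',-
options=>'GATHER STALE',-
cascade=>true);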


Gathering Object statistics

The Cost Based Optimizer (CBO) uses statistics to determine the execution plan for a particular query. Potentially, with reduced sample sizes, sampling could produce different statistics due to chance groupings of data that may be the result of differing loading methods etc.

On 10g it is recommended to:

•Gather statistics using scheduled statistics gathering scripts. In most cases the default scripts provide an adequate level of sampling taking into account the following recommendations:

•Use a sample size that is large enough. On 10g, Support suggests an estimate sample size of 100% (if it is possible for this to fit within the maintenance window), even if that means that statistics are gathered at a reduced frequency. If 100% is not feasible, try using at least an estimate of 30%. Generally, the accuracy of the statistics overall outweighs the day to day changes in most applications. This recommendation is made because the default AUTO_SAMPLE_SIZE uses a very small estimate percentage, which can result in poor estimates.

•Ensure all objects (tables and indexes) have statistics gathered. An easy way to achieve this is to use the CASCADE parameter.
•Ensure that any columns with skewed data distribution have histograms collected, at sufficient resolution, using the METHOD_OPT parameter. Support recommends a conservative and more plan-stable approach of adding a histogram only if it is known to be needed, rather than collecting column statistics on all columns. This can be achieved manually, or by using the SKEWONLY option to automatically add column statistics to columns that contain data with a non-uniform distribution. An alternative is the default column statistics setting of AUTO, which means that DBMS_STATS will decide which columns to add histograms to where it believes that they may help to produce a better plan. If statistics are not completely up to date, then the presence of histograms can cause trouble when parsing values are out of range, or between values for "frequency" histograms. In these circumstances the optimizer has to make guesses, which may be inaccurate and, on occasion, cause poor plans.

Note that in earlier versions the default setting for the METHOD_OPT parameter was "FOR ALL COLUMNS SIZE 1", which would collect only a high and a low value and effectively meant that there were no detailed column statistics. It is known that in some cases the effect of a histogram is adverse to the generation of a better plan, so users moving between versions may initially wish to set this parameter to its pre-upgrade release value, and later adjust to the post-upgrade release default value. See:

Note:465787.1 Managing CBO Stats during an upgrade to 10g or 11g

As ever, testing different values with the application will yield the best results.

•If partitions are in use, gather global statistics where possible. Global stats are very important, but gathering them is often avoided due to the sizes involved and the length of time required. If 100% samples are not possible, then Support would recommend going for a minimum of 1%. Gathering with small sample sizes (e.g. 0.001, 0.0001, 0.00001 etc.) can be very effective but, equally, a large proportion of the data will not be examined, which could prove decisive to the optimizer's plan choices. Note that the available range for the ESTIMATE_PERCENT parameter is a very flexible [0.000001 -> 100], which allows very small sample sizes suitable for huge partitioned tables. Testing will reveal the most suitable settings for each system (see the sketch after the defaults list below). See:

Note:236935.1 Global statistics - An Explanation

•Gather system statistics to reflect the CPU loading of the system and to improve the accuracy of the CBO's estimates by providing the CBO with CPU cost estimates in addition to the normal I/O cost estimates. See:

Note:470316.1 Using Actual System Statistics (Collected CPU and IO information)
Note:149560.1 Collect and Display System Statistics (CPU and IO) for CBO usage
Note:153761.1 Scaling the System to Improve CBO optimizer

Note that the defaults for statistics gathering on different versions of Oracle are not necessarily the same, for example:

•ESTIMATE_PERCENT defaults:
◦9i : 100%
◦10g : DBMS_STATS.AUTO_SAMPLE_SIZE (using a very small estimate percentage)
◦11g : DBMS_STATS.AUTO_SAMPLE_SIZE (using a larger estimate percentage - 100%)
•METHOD_OPT defaults:
◦9i : "FOR ALL COLUMNS SIZE 1" - effectively no detailed column statistics.
◦10g and 11g : "FOR ALL COLUMNS SIZE AUTO" - This setting means that DBMS_STATS decides which columns to add histograms to where it believes that they may help to produce a better plan.
In 11g, AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT defaults to 100% and is therefore as accurate as possible. Additionally, even though a 100% sample is collected, the gathering process is fast because a new hashing algorithm is used to compute the statistics rather than sorting (in 9i and 10g the "slow" part was typically the sorting). In 10g the default ESTIMATE_PERCENT sample size was extremely small, which often resulted in poor statistics, and it is therefore not recommended.
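
As a minimal sketch for a large partitioned table (OWNER and BIG_PART_TAB are
placeholder names; the 1% sample follows the minimum suggested above), global
statistics can be gathered using the GRANULARITY parameter:

exec dbms_stats.gather_table_stats(ownname=>'OWNER',-
tabname=>'BIG_PART_TAB',-
granularity=>'GLOBAL',-
estimate_percent=>1,-
cascade=>true);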

SAMPLE STATISTIC GATHERING COMMANDS

Gathering statistics on an individual table:

exec dbms_stats.gather_table_stats( -
ownname => 'Schema_name', -
tabname => 'Table_name', -
estimate_percent => 100, -
cascade => TRUE, -
method_opt => 'FOR ALL COLUMNS SIZE 1');

N.B. replace 'Schema_name' and 'Table_name' with the names of the schema and table to gather statistics for, respectively.

Gathering statistics for all objects in a schema:

exec dbms_stats.gather_schema_stats( -
ownname => 'Schema_name', -
cascade => TRUE, -
method_opt => 'FOR ALL COLUMNS SIZE 1');

N.B. replace 'Schema_name' with the name of the desired schema.

Gathering statistics for all objects in the database:

exec dbms_stats.gather_database_stats( -
cascade => TRUE, -
method_opt => 'FOR ALL COLUMNS SIZE 1');

NOTE:

For cases where column data is known to be skewed and column statistics are known to be beneficial, replace:

method_opt => 'FOR ALL COLUMNS SIZE 1'

with

method_opt => 'FOR ALL COLUMNS SIZE AUTO'

or with

method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY'

to automatically add column statistics to columns that contain data with a non-uniform distribution.


References
NOTE:153761.1 - System Statistics: Scaling the System to Improve CBO optimizer
NOTE:236935.1 - Global statistics - An Explanation
NOTE:237901.1 - Gathering Schema or Database Statistics Automatically in 8i and 9i - Examples
NOTE:377152.1 - Best Practices for automatic statistics collection on Oracle 10g
NOTE:388474.1 - Master Note: Recommendations for Gathering Optimizer Statistics on 9i
NOTE:44961.1 - Statistics Gathering: Frequency and Strategy Guidelines
NOTE:452011.1 - Master Note: Restoring table statistics in 10G onwards
NOTE:465787.1 - Managing CBO Stats during an upgrade to 10g or 11g
NOTE:470316.1 - Using Actual System Statistics (Collected CPU and IO information)

Automatic SQL Tuning - SQL Profiles

Clarification/Explanation
==========================

The query optimizer can sometimes produce inaccurate estimates about
an attribute of a statement due to lack of information, leading to poor
execution plans. Traditionally, users have corrected this problem by
manually adding hints to the application code to guide the optimizer
into making correct decisions. For packaged applications, changing
application code is not an option and the only alternative available
is to log a bug with the application vendor and wait for a fix.

Automatic SQL Tuning deals with this problem with its SQL Profiling
capability. The Automatic Tuning Optimizer creates a profile of the
SQL statement called a SQL Profile, consisting of auxiliary statistics
specific to that statement. The query optimizer under normal mode makes
estimates about cardinality, selectivity, and cost that can sometimes be
off by a significant amount resulting in poor execution plans. SQL Profile
addresses this problem by collecting additional information using
sampling and partial execution techniques to verify and, if necessary,
adjust these estimates.

During SQL Profiling, the Automatic Tuning Optimizer also uses execution
history information of the SQL statement to appropriately set optimizer
parameter settings, such as changing the OPTIMIZER_MODE initialization
parameter setting from ALL_ROWS to FIRST_ROWS for that SQL statement.

The output of this type of analysis is a recommendation to accept the
SQL Profile. A SQL Profile, once accepted, is stored persistently in
the data dictionary. Note that the SQL Profile is specific to a
particular query. If accepted, the optimizer under normal mode uses the
information in the SQL Profile in conjunction with regular
database statistics when generating an execution plan.
The availability of the additional information makes it possible
to produce well-tuned plans for the corresponding SQL statement without
requiring any change to the application code.

The scope of a SQL Profile can be controlled by the CATEGORY profile attribute.
This attribute determines which user sessions can apply the profile.
You can view the CATEGORY attribute for a SQL Profile in CATEGORY column
of the DBA_SQL_PROFILES view. By default, all profiles are created in the
DEFAULT category. This means that all user sessions where the SQLTUNE_CATEGORY
initialization parameter is set to DEFAULT can use the profile.

By altering the category of a SQL profile, you can determine which sessions are
affected by the creation of a profile. For example, by setting the category
of a SQL Profile to DEV, only those user sessions where the SQLTUNE_CATEGORY
initialization parameter is set to DEV can use the profile. All other sessions
do not have access to the SQL Profile and execution plans for SQL statements
are not impacted by the SQL profile. This technique enables you to test
a SQL Profile in a restricted environment before making it available to other
user sessions.
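
As a minimal sketch of this technique (the profile name and the DEV category
are placeholders):

BEGIN
  DBMS_SQLTUNE.ALTER_SQL_PROFILE(
    name           => 'my_sql_profile',
    attribute_name => 'CATEGORY',
    value          => 'DEV');
END;
/

-- then, in the testing session only:
ALTER SESSION SET SQLTUNE_CATEGORY = 'DEV';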

It is important to note that the SQL Profile does not freeze the execution plan
of a SQL statement, as done by stored outlines. As tables grow or indexes
are created or dropped, the execution plan can change with the same SQL Profile.
The information stored in it continues to be relevant even as the data
distribution or access path of the corresponding statement changes.
However, over a long period of time, its content can become outdated and
would have to be regenerated. This can be done by running Automatic
SQL Tuning again on the same statement to regenerate the SQL Profile.

SQL Profiles apply to the following statement types:

SELECT statements
UPDATE statements
INSERT statements (only with a SELECT clause)
DELETE statements
CREATE TABLE statements (only with the AS SELECT clause)
MERGE statements (the update or insert operations)

Managing SQL Profiles
=====================

While SQL Profiles are usually handled by Oracle Enterprise Manager as part
of the Automatic SQL Tuning process, SQL Profiles can be managed through the
DBMS_SQLTUNE package. To use the SQL Profiles APIs, you need the
CREATE ANY SQL_PROFILE, DROP ANY SQL_PROFILE, and ALTER ANY SQL_PROFILE
system privileges.
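
For example, the privileges can be granted as follows (SCOTT is a placeholder;
note that the GRANT command spells the privilege names with spaces rather than
underscores):

GRANT CREATE ANY SQL PROFILE TO scott;
GRANT ALTER ANY SQL PROFILE TO scott;
GRANT DROP ANY SQL PROFILE TO scott;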

Accepting a SQL Profile
=======================

You can use the DBMS_SQLTUNE.ACCEPT_SQL_PROFILE procedure to accept a
SQL Profile recommended by the SQL Tuning Advisor. This creates and stores a
SQL Profile in the database. For example:

DECLARE
my_sqlprofile_name VARCHAR2(30);
BEGIN
my_sqlprofile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE (
task_name => 'my_sql_tuning_task',
name => 'my_sql_profile');
END;
/


Where my_sql_tuning_task is the name of the SQL tuning task.
You can view information about a SQL Profile in the DBA_SQL_PROFILES view.
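
For example (a minimal sketch):

select name, category, status from dba_sql_profiles;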

Altering a SQL Profile
======================

You can alter the STATUS, NAME, DESCRIPTION, and CATEGORY attributes of
an existing SQL Profile with the ALTER_SQL_PROFILE procedure. For example:

BEGIN
DBMS_SQLTUNE.ALTER_SQL_PROFILE(
name => 'my_sql_profile',
attribute_name => 'STATUS',
value => 'DISABLED');
END;
/


In this example, my_sql_profile is the name of the SQL Profile that you want to alter.
The status attribute is changed to disabled which means the SQL Profile is
not used during SQL compilation.

Dropping a SQL Profile
======================
You can drop a SQL Profile with the DROP_SQL_PROFILE procedure. For example:

BEGIN
DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'my_sql_profile');
END;
/


In this example, my_sql_profile is the name of the SQL Profile you want to drop.
You can also specify whether to ignore errors raised if the name does not exist.
For this example, the default value of FALSE is accepted.
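
To ignore such errors instead, the call might look like this (a sketch using
the ignore parameter of DROP_SQL_PROFILE):

BEGIN
DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'my_sql_profile', ignore => TRUE);
END;
/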

Example
=======

SESSION 1 -- SCOTT
=========

SQL> create table test (n number );

Table created.

declare
begin
for i in 1 .. 10000
loop
insert into test values(i);
commit;
end loop;
end;
/

PL/SQL procedure successfully completed.


create index test_idx on test(n);

Index created.


analyze table test estimate statistics (OR use dbms_stats)

Table analyzed.


select /*+ no_index(test test_idx) */ * from test where n=1

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=1 Bytes=13)
1 0 TABLE ACCESS (FULL) OF 'TEST' (TABLE) (Cost=6 Card=1 Bytes
=13)



SESSION 2 -- SYS
=========

1 DECLARE
2 my_task_name VARCHAR2(30);
3 my_sqltext CLOB;
4 BEGIN
5 my_sqltext := 'select /*+ no_index(test test_idx) */ * from test where n=
1';
6 my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
7 sql_text => my_sqltext,
8 user_name => 'SCOTT',
9 scope => 'COMPREHENSIVE',
10 time_limit => 60,
11 task_name => 'my_sql_tuning_task_2',
12 description => 'Task to tune a query on a specified table');
13* END;
14 /

PL/SQL procedure successfully completed.


1 BEGIN
2 DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task_2');
3* end;
SQL> /

PL/SQL procedure successfully completed.


1 SET LONG 1000
2 SET LONGCHUNKSIZE 1000
3 SET LINESIZE 100
4* SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task_2') from DUAL;



DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK_2')
--------------------------------------------------------------------------------
--------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name : my_sql_tuning_task_2
Scope : COMPREHENSIVE
Time Limit(seconds): 60
Completion Status : COMPLETED
Started at : 05/04/2004 17:36:05
Completed at : 05/04/2004 17:36:05

-------------------------------------------------------------------------------
SQL ID : d4wgpc5g0s0vu

DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK_2')
--------------------------------------------------------------------------------
--------------------
SQL Text: select /*+ no_index(test test_idx) */ * from test where n=1

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.

Recommendation (estimated benefit: 83.84%)

DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK_2')
--------------------------------------------------------------------------------
--------------------
------------------------------------------
Consider accepting the recommended


1 DECLARE
2 my_sqlprofile_name VARCHAR2(30);
3 BEGIN
4 my_sqlprofile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE (
5 task_name => 'my_sql_tuning_task_2',
6 name => 'my_sql_profile');
7* END;
8 /

PL/SQL procedure successfully completed.


SQL> select to_char(sql_text) from dba_sql_profiles;

TO_CHAR(SQL_TEXT)
------------------------------------------------------------------------
select /*+ no_index(test test_idx) */ * from test where n=1


SESSION 1 --- SCOTT


SQL> select /*+ no_index(test test_idx) */ * from test where n=1;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=13)
1 0 INDEX (RANGE SCAN) OF 'TEST_IDX' (INDEX) (Cost=1 Card=1 By
tes=13

Solaris 10 Kernel Patch 127127-11

Solaris 10 Kernel Patch 127127-11 and Certain OpenSolaris Builds May Cause Random Memory Corruption on UltraSPARC T2 Systems


In this Document
Description
Likelihood of Occurrence
Possible Symptoms
Workaround or Resolution
Patches
Modification History
References



--------------------------------------------------------------------------------



Applies to:
OpenSolaris Operating System - Version: All Versions and later [Release: All Releases and later]
Sun Software > Operating Systems > Solaris Operating System
Sun SPARC Sun OS
________________________________

SUNBUG 6837313

Date of Resolved Release: 13-Jul-2010
Description
An issue with the UltraSPARC T2 random number generator device driver n2rng(7d) as delivered in Solaris 10 Kernel Patch 127127-11 and certain OpenSolaris builds may cause random memory corruption on UltraSPARC T2 systems, resulting in hard to diagnose panics.

Likelihood of Occurrence
This issue can occur in the following releases:

SPARC Platform

•Solaris 10 with patch 127127-11 and without patch 140151-02
•OpenSolaris based upon builds snv_80 through snv_118
Notes

1. OpenSolaris distributions may include additional bug fixes above and beyond the build from which they were derived. The base build can be determined as follows:

$ uname -v
snv_112

2. Solaris 8 and 9 are not impacted by this issue. Solaris 10 and OpenSolaris on the x86 platform are not impacted by this issue.

3. This issue only affects the UltraSPARC T2 family of systems. Systems in the affected UltraSPARC T2 family are the following:

T5120/T5220, T5140/T5240, T5440, T6320, T6340, CP3260

4. This issue only occurs when security oriented applications use the n2rng(7d) driver to generate random numbers/data. Applications which utilize underlying network security protocols such as SSL(3) or IPsec(7P) indirectly use the n2rng driver.

Possible Symptoms
There are no predictable symptoms that would indicate the described issue has occurred.
Workaround or Resolution
This issue is addressed in the following releases:

SPARC Platform

•Solaris 10 with patch 140151-02 or later
•OpenSolaris based upon builds snv_119 or later
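
To verify whether the resolving patch is installed on Solaris 10, something
like the following can be used (a sketch; showrev -p lists installed patches):

$ showrev -p | grep 140151
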
Patches
SUNPATCH 140151-02
Modification History
Date of Resolved release: 13-Jul-2010

References
SUNPATCH 140151-02
SUNBUG 6837313

Oracle10g RMAN Recovery Catalog Known Performance Issues

In this Document
Description
Likelihood of Occurrence
Possible Symptoms
Workaround or Resolution
Patches
Modification History
References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4 - Release: 10.1 to 10.2
Information in this document applies to any platform.

Description

The bugs listed below are associated with the RMAN Recovery Catalog and affect RMAN performance. To reduce the impact the catalog can have on backup and recovery operations, make sure all the workarounds for these bugs are implemented in the recover.bsq file.

NOTE: Before making any changes to recover.bsq you should make a backup copy of the original file.

Likelihood of Occurrence

This can happen with any environment using RMAN with the recovery catalog version up to 10.2.0.4.0.

Possible Symptoms

Known RMAN catalog issues causing severe performance impact on RMAN Backup & Recovery operations.

BUG 6451722 RSR TABLE IN RMAN CATALOG DATABASE NEVER GETS CLEANED

BUG 5219484 CATALOG RESYNCS ARE VERY SLOW - ROUT TABLE HAS 6 MILLION ROWS +

BUG 6476935 10.2.0.3 RDBMS SLOW RESYNC DUE TO MISSING INDEX ON RSR TABLE

BUG: 6034995 NEED NEW PURGING ALGORITHM FOR ROUT ROWS DURING RESYNC

BUG: 7595777 RMAN TAKES A LONG TIME BEFORE STARTING THE BACKUP

BUG 7173341 RMAN internal cleanup may be slow


Workaround or Resolution

SOLUTIONS

BUG 7173341 RMAN internal cleanup may be slow

Workaround
==========
Replace the cleanupRSR query with the following in recover.bsq and upgrade
the recovery catalog schema using the UPGRADE CATALOG rman command.

DELETE FROM rsr
WHERE rsr_end < nowTime-60
AND rsr.dbinc_key IN
(select dbinc_key from dbinc
where dbinc.db_key = this_db_key) ;
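
A minimal sketch of the catalog upgrade step (connection details are
placeholders; RMAN asks for the command to be entered a second time to
confirm):

$ rman catalog rman_owner/password@catdb
RMAN> upgrade catalog;
RMAN> upgrade catalog;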

BUG 5219484 CATALOG RESYNCS ARE VERY SLOW - ROUT TABLE HAS 6 MILLION ROWS +

Workaround
==========

See Metalink Note 378234.1 Rman Catalog Resync Operation is Very slow at 10G

BUG 5219484 CATALOG RESYNCS ARE VERY SLOW - ROUT TABLE HAS 6 MILLION ROWS +

Workaround
==========

Reducing the number of days from 60 to 7 for cleaning up ROUT
rows will also help. The ROUT table contains the history of RMAN
output for previous RMAN sessions (most of the time, this is due to a
scheduled backup output).

PROCEDURE cleanupROUT IS
start_time date;
high_stamp number;
high_session_key number;
BEGIN
IF (this_db_key IS NULL) THEN
raise_application_error(-20021, 'Database not set');
END IF;

start_time := SYSDATE;
high_stamp := date2stamp(start_time-7);

SELECT nvl(max(rsr_key), 0) INTO high_session_key
FROM rsr, dbinc
WHERE rsr.dbinc_key = dbinc.dbinc_key
AND dbinc.db_key = this_db_key
AND rsr.rsr_sstamp < high_stamp;

DELETE FROM rout
WHERE rout_skey <= high_session_key
AND this_db_key = rout.db_key;

deb('cleanupROUT - deleted ' || sql%rowcount || ' rows from rout table');
deb('cleanupROUT - took ' || ((sysdate - start_time) * 86400) || ' seconds');

END cleanupROUT;

BUG 6476935 10.2.0.3 RDBMS SLOW RESYNC DUE TO MISSING INDEX ON RSR TABLE


Workaround
==========

SQL> create index rsr_i_stamp on rsr(rsr_sstamp, rsr_srecid);

BUG 6034995 NEED NEW PURGING ALGORITHM FOR ROUT ROWS DURING RESYNC

Details:
RMAN only purges rows from the ROUT table based on age which can lead to poor performance of SYNC operations as the table can grow excessively large.
Workaround:
Manually delete older rows from ROUT.
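
A hedged sketch of such a manual cleanup, mirroring the cleanupROUT logic
shown above (:cutoff_stamp is a placeholder for the oldest stamp to keep;
take a backup of the catalog schema first):

DELETE FROM rout
 WHERE rout_skey <= (SELECT nvl(max(rsr_key), 0)
                       FROM rsr, dbinc
                      WHERE rsr.dbinc_key = dbinc.dbinc_key
                        AND rsr.rsr_sstamp < :cutoff_stamp);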


BUG: 7595777 RMAN TAKES A LONG TIME BEFORE STARTING THE BACKUP

Details: Standby resync too slow. This bug will be fixed in 10.2.0.5 / 11gR2.
Workaround: Replace the wasresynced function in recover.bsq with the following:
function wasresynced(until_stamp IN number
,high_stamp IN number) return number is
nodups number; -- number of duplicates
high number;
low number;
resyncstamp number;
begin
high := high_stamp;
low := until_stamp;
nodups := 0;
resyncstamp := 0;
deb('resync', 'wasresynced high_stamp=' || high_stamp ||
' high_date=' || stamp2date(high_stamp), dbtype);
for duprec in duprec_c(low, high) loop
if (dbms_rcvcat.isDuplicateRecord(recid => duprec.recid
,stamp => duprec.stamp
,type => duprec.type)) then
if (resyncstamp = 0) then
resyncstamp := duprec.stamp;
end if;
nodups := nodups + 1;
if (nodups >= maxdups) then
deb('resync', 'wasresynced resyncstamp=' || resyncstamp ||
' resyncdate=' || stamp2date(resyncstamp), dbtype);
return resyncstamp;
end if;
else -- couldn't find 16 consecutive duplicate records.
deb('resync', 'wasresynced could not find record recid=' ||
duprec.recid || ' stamp=' || duprec.stamp || ' type=' ||
duprec.type || ' maxdups=' || nodups, dbtype);
return 0;
end if;
end loop;

-- Timestamp range not enough to satisfy the number of duplicates.
-- Retry using a higher timestamp
deb('resync', 'timestamp range not enough - nodups=' || nodups,
dbtype);
return -1;
end;



Patches

You can review the $ORACLE_HOME/rdbms/admin/recover.bsq file to verify you have these fixes. All the fixes except for bug 7595777 are in the generic 10.2.0.4.0 code line. Anything previous to this version requires all 4 bug fixes to avoid performance impact from the catalog connection during backup and recovery operations.

Modification History

01-OCT-2007 Document creation
18-Oct-2007 Alert published
22-Oct-2007 Publication date added to Modification History
11-DEC-2007 Removed bug# incorrectly added

References

BUG:6451722 - RSR TABLE IN RMAN CATALOG DATABASE NEVER GETS CLEANED
BUG:6476935 - SLOW RESYNC DUE TO MISSING INDEX ON RSR TABLE
BUG:7173341 - CLEANUPRSR IS TAKING VERY LONG TIME CAUSING SLOW RESYNC
BUG:7595777 - RMAN TAKES A LONG TIME BEFORE STARTING THE BACKUP
NOTE:247611.1 - Known RMAN Performance Problems
NOTE:363409.1 - Known RMAN issues in Oracle10g
NOTE:378234.1 - Rman Catalog Resync Operation is Very slow at 10G
NOTE:413098.1 - Extremely Poor RMAN Backup Performance to NFS After Upgrade to 10.2

Support Status and Alerts for Oracle 10g Release 2 (10.2.0.X)

Support Status and Alerts for Oracle 10g Release 2 (10.2.0.X)

This note contains a list of the main issues affecting RDBMS releases 10.2.0.X. The first part of this note gives generic information - platform specific issues and differences can be found in the Platform Specific Addenda section at the end of the note. For other Server versions see Note:161818.1.
Note: Alerts are now only listed in the Known Issues notes for each patch set level.

Base Release Information

Information in this section is relevant for the 10.2.0.1 Base Release and for any Patch Set which may be applied on top of 10.2.0.1.
Support Status
  For details of 10.2.0 desupport dates see the Lifetime Support Policy.
  Frequently Asked Questions about 10.2 Desupport: Note:1130327.1
Documentation
  Online Documentation
Upgrade Information / Alerts
  Rule Based Optimizer (RBO) does not exist in 10g: Note:189702.1
  Interoperability support between Oracle Releases: Note:207303.1
Notification of Changes in Future Releases
  None at present.

Patch Sets

This section gives a summary of the patch sets available for Oracle 10g Release 2.
Note that the BASE release of Oracle 10g Release 2 is 10.2.0.1 .
10.2.0.5 Current / Terminal Patch Set
  List of fixes included in 10.2.0.5: Note:1088172.1
  Known issues and alerts affecting 10.2.0.5: Note:1087991.1
10.2.0.4 Third Patch Set
  List of fixes included in 10.2.0.4: Note:401436.1
  Known issues and alerts affecting 10.2.0.4: Note:555579.1
10.2.0.3 Second Patch Set
  List of fixes included in 10.2.0.3: Note:391116.1
  Known issues and alerts affecting 10.2.0.3: Note:401435.1
10.2.0.2 First Patch Set
  List of fixes included in 10.2.0.2: Note:358749.1
  Known issues and alerts affecting 10.2.0.2: Note:359415.1
10.2.0.1 Oracle 10g Release 2 Base Release
  Known issues and alerts affecting 10.2.0.1: Note:316901.1

Platform Specific Addenda

The table below lists the releases available for each platform.
The "Oracle Enterprise Manager Grid Control" patch sets are listed below as "version-Grid" and should only be installed in the Management Agent, Management Repository and Management Server homes.

In the table, clicking on the hyperlinks beginning with "ARU" should take you directly to the relevant patch download page, whilst clicking on the "ReadMe" link should take you directly to the latest version of the ReadMe for the Patch Set.
Fujitsu Siemens BS2000/OSD
10.2.0.4 Bug 6810189 ARU:10796592 ReadMe
Fujitsu Siemens BS2000/OSD (SX Series)
10.2.0.4 Bug 6810189 ARU:10853396 ReadMe
HP OpenVMS (Alpha)
10.2.0.4 Bug 6810189 ARU:10819064 ReadMe
10.2.0.2 Base Release
HP OpenVMS (Itanium)
10.2.0.4 Bug 6810189 ARU:10819065 ReadMe
10.2.0.2 Base Release
HP Tru64 Unix
10.2.0.5-Grid Bug 3731593 ARU:11475809 ReadMe
10.2.0.4 Bug 6810189 ARU:11017589 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:8950900 ReadMe
10.2.0.3 Bug 5337014 ARU:9558401 ReadMe
10.2.0.2 Base Release
HPUX-11 (64-bit Oracle)
10.2.0.5-Grid Bug 3731593 ARU:11236583 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:10228833 ReadMe
10.2.0.4 Bug 6810189 ARU:10237651 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:9014347 ReadMe
10.2.0.3 Bug 5337014 ARU:8979591 Latest ReadMe
10.2.0.2-Grid Bug 3731593 ARU:8658709 ReadMe
10.2.0.2 Bug 4547817 ARU:8263300 Latest ReadMe
10.2.0.1 » Available
HPUX Itanium 64bit
10.2.0.5-Grid Bug 3731593 ARU:11255858 ReadMe
10.2.0.5 Bug 8202632 ARU:12652895 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:10070471 ReadMe
10.2.0.4 Bug 6810189 ARU:10162526 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:9039441 ReadMe
10.2.0.3 Bug 5337014 ARU:9165881 ReadMe
10.2.0.2-Grid Bug 3731593 ARU:8648550 ReadMe
10.2.0.2 Bug 4547817 ARU:8419120 Latest ReadMe
10.2.0.1 » Available
IBM AIX Based Systems (64-bit)
10.2.0.5-Grid Bug 3731593 ARU:11274626 ReadMe
10.2.0.5 Bug 8202632 ARU:12653791 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:10168035 ReadMe
10.2.0.4 Bug 6810189 ARU:10195554 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:9014527 ReadMe
10.2.0.3 Bug 5337014 ARU:8979594 Latest ReadMe
10.2.0.2-Grid Bug 3731593 ARU:8652046 ReadMe
10.2.0.2 Bug 4547817 ARU:8263208 Latest ReadMe
10.2.0.1 » Available
IBM Linux for S/390
10.2.0.5-Grid Bug 3731593 ARU:11409259 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:10526965 ReadMe
IBM zSeries Based Linux
10.2.0.4 Bug 6810189 ARU:10827440 ReadMe
10.2.0.3 Bug 5337014 ARU:9275990 Latest ReadMe
IBM Linux on POWER
10.2.0.5-Grid Bug 3731593 ARU:11351098 ReadMe
10.2.0.4 Bug 6810189 ARU:10891985 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:9163225 ReadMe
10.2.0.3 Bug 5337014 ARU:9043862 ReadMe
10.2.0.2-Grid Bug 3731593 ARU:8708142 ReadMe
10.2.0.2 Bug 4547817 ARU:8329161 Latest ReadMe
10.2.0.1 » Available
IBM z/OS
10.2.0.3 Bug 5337014 ARU:12602660 Latest ReadMe
Linux
10.2.0.5-Grid Bug 3731593 ARU:11024942 ReadMe
10.2.0.5 Bug 8202632 ARU:12543981 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:9585511 ReadMe
10.2.0.4 Bug 6810189 ARU:9961232 Latest ReadMe
10.2.0.3-Grid Bug 3731593 ARU:8933198 ReadMe
10.2.0.3 Bug 5337014 ARU:8861857 Latest ReadMe
10.2.0.2-Grid Bug 3731593 ARU:8454839 ReadMe
10.2.0.2 Bug 4547817 ARU:8243539 Latest ReadMe
10.2.0.1 » Available
Linux IA64
10.2.0.5-Grid Bug 3731593 ARU:11128097 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:10248746 ReadMe
10.2.0.4 Bug 6810189 ARU:10517400 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:9138027 ReadMe
10.2.0.3 Bug 5337014 ARU:8908727 Latest ReadMe
10.2.0.2-Grid Bug 3731593 ARU:8698827 ReadMe
10.2.0.2 Bug 4547817 ARU:8460118 Latest ReadMe
10.2.0.1 » Available
Linux x86-64 (AMD)
10.2.0.5-Grid Bug 3731593 ARU:11152361 ReadMe
10.2.0.5 Bug 8202632 ARU:12540824 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:10111960 ReadMe
10.2.0.4 Bug 6810189 ARU:10029612 Latest ReadMe
10.2.0.3-Grid Bug 3731593 ARU:9005938 ReadMe
10.2.0.3 Bug 5337014 ARU:8911725 Latest ReadMe
10.2.0.2-Grid Bug 3731593 ARU:8690977 ReadMe
10.2.0.2 Bug 4547817 ARU:8251421 Latest ReadMe
10.2.0.1 » Available
Oracle Solaris on x86 Intel
10.2.0.X » See 3rd Party Desupport Advisory Note:423053.1
10.2.0.5-Grid Bug 3731593 ARU:11388377 ReadMe
10.2.0.4 Bug 6810189 ARU:10717946 ReadMe
10.2.0.3 Not being released. Next Patch Set is 10.2.0.4
10.2.0.2 Base Release
Oracle Solaris on x86-64
10.2.0.5-Grid Bug 3731593 ARU:11388382 ReadMe
10.2.0.5 Bug 8202632 ARU:12604988 ReadMe
10.2.0.4 Bug 6810189 ARU:10702582 ReadMe
10.2.0.3 Bug 5337014 ARU:9425070 ReadMe
10.2.0.2 Bug 4547817 ARU:8497776 Latest ReadMe
Oracle Solaris on Sparc 32-bit
10.2.0.2-Grid Bug 3731593 ARU:8798092 ReadMe
Oracle Solaris on Sparc 64-bit
10.2.0.5-Grid Bug 3731593 ARU:11178544 ReadMe
10.2.0.5 Bug 8202632 ARU:12605582 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:10031466 ReadMe
10.2.0.4 Bug 6810189 ARU:10162524 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:8976556 ReadMe
10.2.0.3 Bug 5337014 ARU:8908791 Latest ReadMe
10.2.0.2 Bug 4547817 ARU:8263254 Latest ReadMe Mandatory patch required - see Note:5117016.8
10.2.0.1 » Available
Windows NT / Windows 2000 / Windows XP
10.X » Server Patches on Microsoft Platforms Note:161549.1
10.2.0.5-Grid Bug 3731593 ARU:11025082 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:9585704 ReadMe
10.2.0.4 Bug 6810189 ARU:10032632 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:8933258 ReadMe
10.2.0.3 Bug 5337014 ARU:8866412 Latest ReadMe
10.2.0.2 Bug 4547817 ARU:8243586 Latest ReadMe
10.2.0.1 » Available
Windows 64bit (Itanium)
10.X » Server Patches on Microsoft Platforms Note:161549.1
10.2.0.5-Grid Bug 3731593 ARU:11475848 ReadMe
10.2.0.4 Bug 6810189 ARU:10202370 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:9391892 ReadMe
10.2.0.3 Bug 5337014 ARU:8908369 Latest ReadMe
10.2.0.2 Bug 4547817 ARU:8262947 Latest ReadMe
10.2.0.1 » Available
Windows 64bit (AMD64 / EM64T)
10.X » Server Patches on Microsoft Platforms Note:161549.1
10.2.0.5-Grid Bug 3731593 ARU:11299752 ReadMe
10.2.0.4-Grid Bug 3731593 ARU:10353547 ReadMe
10.2.0.4 Bug 6810189 ARU:10205630 ReadMe
10.2.0.3-Grid Bug 3731593 ARU:9353830 ReadMe
10.2.0.3 Bug 5337014 ARU:8910975 Latest ReadMe
10.2.0.2 Bug 4547817 ARU:8263170 Latest ReadMe
10.2.0.1 » Available

ALERT: Sparse Files and Oracle

Introduction:
Oracle does not recommend or support the use of 'sparse' files for an Oracle database, except in the very special case of TEMPFILE database files, which can be created in Oracle8i onwards. With an increasing number of third party data management tools becoming available, it is important to ascertain whether any utilities to be used will affect your database files. This is most important with respect to backup and recovery of files. Oracle has seen situations in the past where recovery of backed up data produced 'sparse' database files which then caused serious problems.

What is a Sparse file ?

An Oracle7 data file is filled with blocks of '0's when it is created.
Ie: Space is pre-allocated and filled with zeroes.
An Oracle8 data file is filled with formatted blocks when it is
created, but the middle of these blocks contains all '0's which may
span one or more O/S blocks.
Note: An Oracle8i TEMPFILE is NOT explicitly filled with '0's and
can be created as a sparse file on some platforms. This is
correct behaviour for this form of file.

A sparse file is a file where any 'empty' O/S blocks do not actually
take up a full block but have a marker to show the block is empty.
The operating system then finds a free block for use when the block
is populated with data.

On most systems creating a sparse file is simple. The code below is
an example of how a sparse file could be created:

#include <fcntl.h>   /* open() and the O_* flags */
#include <unistd.h>  /* lseek(), write(), close() */
#define FILESIZE 1048576L /* 1Mb file (Plus EOF tag) */
main()
{
int fd;

if ((fd=open( "sparse_file", O_CREAT|O_RDWR, 0666 ))>=0) {
lseek( fd, FILESIZE, 0 ); /* seek 1Mb past the start of the empty file */
write( fd, "EOF", 3 );    /* the blocks skipped over are never written */
close( fd );
}
}

Ie: You just seek past the End-Of-File to a new location and blocks
in between will be 'sparse' blocks in that they take no real space.
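
A quick way to try this (a sketch; the compiler invocation and source file
name are placeholders for your own environment):

$ cc -o mksparse sparse.c
$ ./mksparse
$ ls -ls sparse_file     # the first column shows blocks actually allocated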


So how can an Oracle file become sparse ?

Backup and restore utilities need to be as fast as possible. When data is backed up it is quite common for some form of compression to occur, especially where there are completely empty blocks as in an Oracle datafile. If, on restoring files, the 'restore' program does not actually write out empty blocks but just seeks past them, it will produce a sparse file (where the Operating System supports sparse files). You then get a sparse file as part of the database.

Why is this a problem ?

Oracle has preallocated space for its datafiles. If there are any disk space problems these will become apparent when the datafile is CREATED. They should NEVER occur once the file has been created, as Oracle has already been allocated all the disk space it intends to use. Hence errors when filling in new blocks in the datafile are classed as internal errors and can force a datafile offline, or even stop the database.

Sparse files can also lead to low level fragmentation. If a datafile is created on a new Unix partition it should obtain contiguous space on that disk. If the space is allocated during usage (as in a sparse file) there is likely to be much greater fragmentation of the datafile on the underlying disk.

Although sparse files should behave the same as normal files, it is possible for there to be small obscure differences in the behaviour of system calls made by Oracle which can cause unexpected behaviour.

How do I detect if a file is sparse ?

The only way to detect a sparse file is to compare its size (in disk blocks) with the free disk space on a disk. This is easiest to check by removing the file and noting the increase in free disk space.

**Note: Always make a safe copy of a file before removing it. For database files the database should be in a shutdown state.

Eg: On Sequent PTX:

Check disk space:
>>> df -St .
. (/dev/dsk/zd2s2 ): 517860 blocks 134338 i-nodes
total: 552420 blocks 134400 i-nodes

Check file size:
>>> ls -l sparse_file
-rw-r--r-- 1 usupport sys 1048579 Sep 28 09:01 sparse_file

File size in blocks:
>>> du -s sparse_file
2080 sparse_file

Copy file safely away then remove it:
>>> rm sparse_file
>>> df -St .
. (/dev/dsk/zd2s2 ): 517892 blocks 134339 i-nodes
total: 552420 blocks 134400 i-nodes


We have only freed up 517892 - 517860 = 32 blocks for a 2080 block
file !! Hence this file was a sparse file.


For a non sparse file we have:

>>> df -St .
. (/dev/dsk/zd2s2 ): 513722 blocks 134336 i-nodes
total: 552420 blocks 134400 i-nodes

>>> ls -l non_sparse
-rw-r--r-- 1 usupport sys 1048579 Sep 28 09:14 non_sparse

>>> du -s non_sparse
2080 non_sparse

>>> rm non_sparse
>>> df -St .
. (/dev/dsk/zd2s2 ): 515802 blocks 134337 i-nodes
total: 552420 blocks 134400 i-nodes

We have freed up 515802 - 513722 = 2080 blocks, as expected.


It is a good idea just to use 'du -s' on a disk to get its disk usage
and see if this compares to the disk free (df) results. If it does
not (Ie: du reports much more space used than df shows) then analyse
the disk further as above.

It is also possible on most versions of Unix to use the command
"ls -ls" to show the actual number of blocks allocated to a file.
Eg: ls -ls sparse.dbf
144 -rw-r----- 1 oracle dba 104859648 Feb 3 17:54 sparse.dbf
^^^-- 144 OS blocks occupied (= 72K at 512 bytes per block)

How do I 'de-sparse' a sparse file ?

Oracle datafiles MUST be 'de-sparsed' to be used. The only effective way to do this is file by file, using a known 'safe' command, Eg: tar. For example, to 'de-sparse' a file, 'tar' it onto tape (or a tar disk file) and then fetch it back. You should check the disk free space before and after to ensure the file has expanded. 'tar' is safe for this on most machines, but you should check the 'df' results to confirm this.

Eg: To desparse our sparse file earlier:

df -St . -> 517850 blocks free

tar cvf /dev/xxx sparse_file
rm sparse_file
tar xvf /dev/xxx sparse_file

df -St . -> 515802 blocks free

Ensure there is no other disk activity on the current disk that
would confuse the 'df' readings.
Safe Backups

The following Unix commands are usually safe for backup and
recovery:

tar
cpio
dd

Any other commands or third party utilities should be checked
individually for correct behaviour.

Please note that the tar and cpio commands for OSF/1 3.0 can
produce sparse files. See Note:22151.1 for further information.




References:
Document 251336.1 What Are Sparse Files