Typically, for small tables, Oracle recommends accessing data through a full table scan, while for large tables, indexes should be used to accelerate data queries. However, if the query requires returning most or all of the data in the table, a full table scan might still be the best option.
From the V$SYSSTAT view, we can query system statistics related to full table scans:
```sql
SQL> col name for a30
SQL> select name,value from v$sysstat
  2  where name in ('table scans (short tables)','table scans (long tables)');
NAME                                VALUE
------------------------------ ----------
table scans (short tables)            828
table scans (long tables)             101
```
Here, "table scans (short tables)" refers to the number of full table scans performed on small tables, and "table scans (long tables)" refers to the number of full table scans performed on large tables.
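To find which sessions are driving these counts, a similar query can be run against V$SESSTAT joined to V$STATNAME (a sketch; the output format will depend on your environment):
```sql
SQL> select s.sid, n.name, s.value
  2  from v$sesstat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and n.name in ('table scans (short tables)','table scans (long tables)')
  5  and s.value > 0
  6  order by s.value desc;
```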
We can also find this information in the Statspack report:
**Instance Activity Stats for DB: CELLSTAR Instance: ora8i Snaps: 20**
```
Statistic                                      Total   per Second    per Trans
--------------------------------- ------------------ ------------ ------------
...
table scan blocks gotten                  38,228,349         37.0         26.9
table scan rows gotten                   546,452,583        528.9        383.8
table scans (direct read)                      5,784          0.0          0.0
table scans (long tables)                      5,990          0.0          0.0
table scans (rowid ranges)                     5,850          0.0          0.0
table scans (short tables)                 1,185,275          1.2          0.8
```
Generally, if a database shows an excessive number of "table scans (long tables)," the "db file scattered read" wait event is usually also significant. The top 5 wait events from the same report are as follows:
**Top 5 Wait Events**
```
                                                               Wait   % Total
Event                                         Waits       Time (cs)   Wt Time
-------------------------------------------- ------------ ---------- -------
log file parallel write                         1,436,993  1,102,188   10.80
log buffer space                                   16,698    873,203    8.56
log file sync                                   1,413,374    654,587    6.42
control file parallel write                       329,777    510,078    5.00
db file scattered read                            425,578    132,537    1.30
```
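Outside of a Statspack report, this wait can also be checked directly in the V$SYSTEM_EVENT view (a sketch; cumulative values since instance startup will of course differ on every system):
```sql
SQL> select event, total_waits, time_waited
  2  from v$system_event
  3  where event = 'db file scattered read';
```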
Within a database, many statistics and symptoms are closely interrelated. The deeper our understanding of these relationships, the more easily we can diagnose and optimize database issues.
Oracle defines the boundary between large and small tables using an internal parameter called "_small_table_threshold." By default, this parameter equals 2% of the number of buffers in the Buffer Cache. If a table occupies fewer blocks than the value of this parameter, Oracle treats it as a small table; otherwise, it is treated as a large table.
Let's take a look at the situation in Oracle9iR2:
```sql
SQL> @@GetParDescrb.sql
Enter value for par: small
old 6: AND x.ksppinm LIKE '%&par%'
new 6: AND x.ksppinm LIKE '%small%'
NAME VALUE DESCRIB
------------------------------ -------------------- ------------------------------------------------------------
_small_table_threshold 200 threshold level of table size for direct reads
```
In the above database, the default of 200 is approximately 2% of the number of buffers:
```sql
SQL> show parameter db_cache_size
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_size big integer 83886080
SQL> select (83886080/8192)*2/100 from dual;
(83886080/8192)*2/100
---------------------
204.8
```
The distinction between large and small tables (Long/Short) exists because full table scans can cause churn in the Buffer Cache. By default, blocks read by a full table scan of a large table are placed at the LRU end of the list so that they age out quickly and consume fewer buffers. Starting with Oracle8i, Oracle's multiple buffer pool feature (Default/Keep/Recycle pools) gives us another option: for tables of different sizes and access frequencies, we can specify the buffer pool to use as early as table creation, making memory usage more efficient.
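As a sketch of that option, a buffer pool can be assigned in the STORAGE clause at creation time or afterwards with ALTER TABLE. The table names here are hypothetical, and this assumes the KEEP and RECYCLE pools have been sized for the instance (via buffer_pool_keep/buffer_pool_recycle, or db_keep_cache_size/db_recycle_cache_size in later releases):
```sql
-- Pin a small, frequently accessed lookup table in the KEEP pool
SQL> create table hot_codes (code varchar2(10), descr varchar2(100))
  2  storage (buffer_pool keep);

-- Push a large, rarely re-read table through the RECYCLE pool
SQL> alter table big_log storage (buffer_pool recycle);
```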
**Article Source:** 【Eden Network】http://www.edenw.com/tech/devdeloper/database/2010-07-15/4710.html