
Statspack analysis of a 9i database

Hi experts,
Please help me work through the Statspack report of my production 9i database, and advise some recommendations after analyzing the Statspack output below.
Elapsed:     3.75 (min)     225 (sec)
DB Time:     7.84 (min)     470.65 (sec)
Cache:     10,016 MB     
Block Size:     8,192 bytes     
Transactions:     2.01 per second     
Performance Summary
Physical Reads:     15,666/sec          MB per second:     122.39 MB/sec     
Physical Writes:     22/sec          MB per second:     0.17 MB/sec     
Single-block Reads:     1,412.69/sec          Avg wait:     0.03 ms     
Multi-block Reads:     1,916.26/sec          Avg wait:     0.05 ms     
Tablespace Reads:     3,346/sec          Writes:     22/sec     
Top 5 Events
Event     Percentage of Total Timed Events
CPU time     79.89%
PX Deq: Execute Reply     6.38%
db file scattered read     4.32%
SQL*Net more data from dblink     4.29%
db file sequential read     2.00%
Tablespace I/O Stats
Tablespace     Read/s     Av Rd(ms)     Blks/Rd     Writes/s     Read%     % Total IO
TS_CCPS     3,117      0     2.5      0      100%     92.5%
TS_OTHERS     204      0.2     26.2      1      99%     6.09%
TS_AC_POSTED03     19      1.9     127      2      89%     0.63%
Load Profile
Logical reads:     42,976/s          Parses:     39.41/s     
Physical reads:     15,666/s          Hard parses:     5.43/s     
Physical writes:     22/s          Transactions:     2.01/s     
Rollback per transaction:     0%          Buffer Nowait:     100%     
4 Recommendations:
Your database has relatively high logical I/O at 42,976 reads per second. Logical reads include data block reads from both memory and disk. High LIO is sometimes associated with high CPU activity. CPU bottlenecks occur when the CPU run queue exceeds the number of CPUs on the database server, and this can be seen by looking at the "r" column in the vmstat UNIX/Linux utility or within the Windows performance manager. Consider tuning your application to reduce unnecessary data buffer touches (SQL tuning or PL/SQL bulking), using faster CPUs, or adding more CPUs to your system.
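As a starting point for that SQL tuning, the statements driving the logical I/O can be pulled from v$sql. A minimal sketch (the ROWNUM cut-off and the per-execution column are just conveniences):

    -- top 10 statements by logical I/O since instance startup
    SELECT *
      FROM (SELECT sql_text,
                   buffer_gets,
                   executions,
                   ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec
              FROM v$sql
             ORDER BY buffer_gets DESC)
     WHERE ROWNUM <= 10;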
You are performing more than 15,666 disk reads per second. High disk latency can be caused by too-few physical disk spindles. Compare your read times across multiple datafiles to see which datafiles are slower than others. Disk read times may be improved if contention is reduced on the datafile, even though read times may be high due to the file residing on a slow disk. You should identify whether the SQL accessing the file can be tuned, as well as the underlying characteristics of the hardware devices.
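To compare read times across datafiles as suggested, something like the following v$filestat query should do. A sketch only: readtim is in centiseconds, and the figures are cumulative since instance startup:

    SELECT d.name,
           f.phyrds,
           ROUND(f.readtim * 10 / GREATEST(f.phyrds, 1), 2) AS avg_read_ms  -- centiseconds to ms
      FROM v$filestat f, v$datafile d
     WHERE f.file# = d.file#
     ORDER BY avg_read_ms DESC;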
Check your average disk read speed later in this report and ensure that it is under 7ms. Assuming that the SQL is optimized, the only remaining solutions are the addition of RAM for the data buffers or a switch to solid-state disks. Give careful consideration to these tablespaces with high read I/O: TS_CCPS, TS_OTHERS, TS_AC_POSTED03, TS_RATING, TS_GP.
You have more than 1,222 unique SQL statements entering your shared pool, with the resulting overhead of continuous RAM allocation and freeing within the shared pool. A hard parse is expensive because each incoming SQL statement must be re-loaded into the shared pool, with the associated overhead of shared pool RAM allocation and memory management. Once loaded, the SQL must then be completely re-checked for syntax and semantics and an executable generated. Excessive hard parsing can occur when your shared_pool_size is too small (and reentrant SQL is paged out) or when you have non-reusable SQL statements without host variables. See the cursor_sharing parameter for an easy way to make SQL reentrant, and remember that you should always use host variables in your SQL so that statements can be shared.
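For illustration, the difference between a literal statement (a new hard parse per distinct value) and a bind-variable version (one shared cursor) looks like this in SQL*Plus; the ORDERS table and its columns are hypothetical:

    -- literal value: a separate cursor for every customer_id, hence repeated hard parses
    SELECT order_total FROM orders WHERE customer_id = 12345;

    -- bind variable: one cursor shared across all values, soft parses after the first call
    VARIABLE cust NUMBER
    EXEC :cust := 12345
    SELECT order_total FROM orders WHERE customer_id = :cust;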
Instance Efficiency
Buffer Hit:     69.13%          In-memory Sort:     100%     
Library Hit:     96.4%          Latch Hit:     99.99%     
Memory Usage:     95.04%          Memory for SQL:     64.19%     
2 Recommendations:
Your Buffer Hit ratio is 69.13%. The buffer hit ratio measures the probability that a data block will be in the buffer cache upon a re-read of the data block. If your database has a large number of frequently referenced table rows (a large working set), then investigate increasing your db_cache_size. For specific recommendations, see the output from the data buffer cache advisory utility (using the v$db_cache_advice utility). Also, a low buffer hit ratio is normal for applications that do not frequently re-read the same data blocks. Moving to SSD will alleviate the need for a large data buffer cache.
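The advisory data referred to above can be read directly from v$db_cache_advice. A sketch, assuming db_cache_advice is left at its 9.2 default of ON:

    SELECT size_for_estimate        AS cache_mb,
           estd_physical_read_factor,
           estd_physical_reads
      FROM v$db_cache_advice
     WHERE name = 'DEFAULT'
       AND block_size = 8192
     ORDER BY size_for_estimate;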
Your shared pool may be filled with non-reusable SQL, with 95.04% memory usage. The Oracle shared pool contains Oracle's library cache, which is responsible for collecting, parsing, interpreting, and executing all of the SQL statements that go against the Oracle database. You can check the dba_hist_librarycache table in Oracle 10g to see your historical library cache RAM usage.
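On a 9i instance the same information is visible in v$librarycache; high reload counts suggest reusable cursors are being aged out of the shared pool. A minimal sketch:

    SELECT namespace,
           gets,
           pins,
           reloads,
           invalidations,
           ROUND(gethitratio * 100, 2) AS get_hit_pct
      FROM v$librarycache
     ORDER BY reloads DESC;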
SQL Statistics
Wait Events
Event     Waits     Wait Time (s)     Avg Wait (ms)     Waits/txn
PX Deq: Execute Reply     137     30     219     0.3
db file scattered read     431,159     20     0     951.8
SQL*Net more data from dblink     51,140     20     0     112.9
db file sequential read     317,856     9     0     701.7
io done     6,842     5     1     15.1
db file parallel read     21     1     52     0.0
local write wait     250     1     4     0.6
db file parallel write     825     1     1     1.8
SQL*Net message from dblink     208     1     3     0.5
log file parallel write     2,854     1     0     6.3
0 Recommendations:
Instance Activity Stats
Statistic     Total     per Second     per Trans
SQL*Net roundtrips to/from client     87,889     390.6     194.0
consistent gets     10,141,287     45,072.4     22,387.0
consistent gets - examination     884,579     3,931.5     1,952.7
db block changes     100,342     446.0     221.5
execute count     18,913     84.1     41.8
parse count (hard)     1,222     5.4     2.7
parse count (total)     8,868     39.4     19.6
physical reads     3,525,003     15,666.7     7,781.5
physical reads direct     539,879     2,399.5     1,191.8
physical writes     5,132     22.8     11.3
physical writes direct     29     0.1     0.1
redo writes     1,598     7.1     3.5
session cursor cache hits     4,378     19.5     9.7
sorts (disk)     0     0.0     0.0
sorts (memory)     4,988     22.2     11.0
table fetch continued row     310     1.4     0.7
table scans (long tables)     82     0.4     0.2
table scans (short tables)     18,369     81.6     40.6
workarea executions - onepass     0     0.0     0.0
5 Recommendations:
You have high network activity, with 390.6 SQL*Net roundtrips to/from client per second. Review your application to reduce the number of calls to Oracle by encapsulating data requests into larger pieces (i.e., make a single SQL request to populate all online screen items). In addition, check whether your application might benefit from bulk collection by using the PL/SQL "forall" or "bulk collect" operators.
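As a rough illustration of the bulk approach, the PL/SQL below fetches and updates a set of rows in two round trips instead of one per row; the ORDERS table, its columns, and the status values are hypothetical:

    DECLARE
      TYPE t_id_tab IS TABLE OF orders.order_id%TYPE;
      l_ids t_id_tab;
    BEGIN
      -- one fetch for the whole result set
      SELECT order_id
        BULK COLLECT INTO l_ids
        FROM orders
       WHERE status = 'OPEN';

      -- one array-bound update instead of l_ids.COUNT individual updates
      FORALL i IN 1 .. l_ids.COUNT
        UPDATE orders SET status = 'BILLED' WHERE order_id = l_ids(i);
    END;
    /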
You have 3,931.5 consistent gets examination per second. "Consistent gets - examination" is different than regular consistent gets. It is used to read undo blocks for consistent read purposes, but also for the first part of an index read and hash cluster I/O. To reduce logical I/O, you may consider moving your indexes to a large blocksize tablespace. Because index splitting and spawning are controlled at the block level, a larger blocksize will result in a flatter index tree structure.
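If you decide to try a larger blocksize for indexes, the outline is roughly as follows. A sketch only: the cache size, tablespace name, datafile path, and index name are hypothetical, and a 16K buffer cache must exist before a 16K tablespace can be created:

    -- assumes there is spare SGA for a separate 16K cache
    ALTER SYSTEM SET db_16k_cache_size = 256M;

    CREATE TABLESPACE ts_idx_16k
      DATAFILE '/u02/oradata/PROD/ts_idx_16k_01.dbf' SIZE 2G
      BLOCKSIZE 16K;

    ALTER INDEX ccps_pk REBUILD TABLESPACE ts_idx_16k;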
You have high update activity with 446.0 db block changes per second. The db block changes statistic is a rough indication of total database work. It indicates the rate at which buffers are being dirtied, and you may want to optimize your database writer (DBWR) process. You can determine which sessions and SQL statements have the highest db block changes by querying the v$session and v$sesstat views.
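To see which sessions are generating those changes, v$sesstat can be joined to v$statname and v$session. A sketch; the values are cumulative per session:

    SELECT s.sid,
           s.username,
           st.value AS db_block_changes
      FROM v$session s, v$sesstat st, v$statname n
     WHERE st.sid = s.sid
       AND st.statistic# = n.statistic#
       AND n.name = 'db block changes'
     ORDER BY st.value DESC;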
You have high disk reads with 15,666.7 per second. Reduce disk reads by increasing your data buffer size or speed up your disk read speed by moving to SSD storage. You can monitor your physical disk reads by hour of the day using AWR to see when the database has the highest disk activity.
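Since this is 9i (no AWR), the Statspack repository itself gives the by-hour view. A sketch against the stats$ tables, assuming a single instance; the values are cumulative, so adjacent snapshots must be differenced to get per-interval figures:

    SELECT sn.snap_id,
           sn.snap_time,
           ss.value AS physical_reads_cum
      FROM stats$snapshot sn, stats$sysstat ss
     WHERE ss.snap_id = sn.snap_id
       AND ss.name = 'physical reads'
     ORDER BY sn.snap_id;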
You have high small-table full-table scans, at 81.6 per second. Verify that your KEEP pool is sized properly to cache frequently referenced tables and indexes. Moving frequently referenced tables and indexes to SSD or a write accelerator will significantly increase the speed of small-table full-table scans.
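For the KEEP pool check, a small hot table can be assigned to the pool and the per-pool statistics watched afterwards. A sketch; the table name is hypothetical, and the KEEP pool itself is already sized at 1GB per the db_keep_cache_size parameter listed below:

    -- move a small, frequently re-read table into the KEEP pool
    ALTER TABLE rating_codes STORAGE (BUFFER_POOL KEEP);

    -- then watch per-pool activity
    SELECT name,
           physical_reads,
           db_block_gets + consistent_gets AS logical_reads
      FROM v$buffer_pool_statistics;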
Buffer Pool Advisory
Current:     3,599,469,418 disk reads     
Optimized:     1,207,668,233 disk reads     
Improvement:     66.45% fewer     
The Oracle buffer cache advisory utility indicates 3,599,469,418 disk reads during the sample interval. Oracle estimates that doubling the data buffer size (by increasing db_cache_size) will reduce disk reads to 1,207,668,233, a 66.45% decrease.
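If you act on the advisory, the change itself is a one-liner. A sketch, assuming an spfile and enough sga_max_size headroom; otherwise the instance needs a restart with a larger SGA:

    ALTER SYSTEM SET db_cache_size = 16G SCOPE = SPFILE;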
Init.ora Parameters     
Parameter     Value     
cursor_sharing     similar     
db_block_size     8,192     
db_cache_size     8GB     
db_file_multiblock_read_count     32     
db_keep_cache_size     1GB     
hash_join_enabled     true     
log_archive_start     true     
optimizer_index_caching     90     
optimizer_index_cost_adj     25     
parallel_automatic_tuning     false     
pga_aggregate_target     2GB     
query_rewrite_enabled     true     
session_cached_cursors     300     
shared_pool_size     2.5GB     
optimizer_cost_model     choose
1 Recommendations:
You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.

The Best Answer

Advertisement
Systemwide Tuning using STATSPACK Reports [ID 228913.1] and http://jonathanlewis.wordpress.com/statspack-examples/ should be useful.