
Oracle X$ Tables

original at yong321.freeshell.org

Updated to Oracle 12.1.0.2. The X$ tables not included here are too obvious, too obscure, or too uninteresting.

Table Name | Guessed Acronym | Comments
x$activeckpt | active checkpoint | Ckpt_type 2 for MR checkpoint (Ref), 3 for interval (Ref) or thread checkpoint (Ref), 7 for incremental checkpoint, 10 for object reuse/truncate checkpoint, 11 for object checkpoint (Ref).
x$bh | buffer header | This table is commonly used to find the object and the file# and block# of its header when there’s high cache buffers chains latch contention: select obj, dbarfil, dbablk from x$bh a, v$latch_children b where a.hladdr = b.addr for the said latch (whose sleeps you think are too high). You can also use this table to see if a specific buffer has too many clones: select dbarfil, dbablk, count(*) from x$bh group by dbarfil, dbablk having count(*) > 2. Note that the obj column matches dba_objects.data_object_id, not object_id. For performance reasons, don’t merge dba_extents with the query of x$bh that has a group by, unless you use an in-line view and the no_merge hint (see J. Lewis, Practical Oracle8i, p.215). The tch column, touch count, records how many times a particular buffer has been accessed. Its flag column is explained by J. Lewis (some unused bits are later used; e.g. bit 29 means plugged_from_foreign_db in 12c); an explanation of state, mode and indx can be found in Anjo Kolk’s paper. Tim is the time the buffer touch happened (Note 1). Lru_flag is about the buffer’s position on the LRU lists (Ref and 136312.1): 2 moved_to_tail, 4 on_auxiliary_list (auxiliary LRU), 8 hot_buffer (on hot end of main LRU); the values can be added, e.g. 6 = 2+4.
x$ckptbuf | checkpoint buffer (queue) | Lists the buffers on the checkpoint queue. Immediately after a full checkpoint, the buffers with non-zero buf_ptr and buf_dbablk should go down.
x$dbgalertext | debug alert extended | One use is to find old alert.log text long after you recycled the physical file: select originating_timestamp, message_text from x$dbgalertext. The message_id and message_group columns are also interesting and are not available in alert.log.
x$dbglogext | debug log extended | 12c
x$dbgricx, x$dbgrifx, x$dbgrikx, x$dbgripx | debug ? | You can quickly summarize what kind of errors the database has had: select error_facility||'-'||error_number, count(*) from x$dbgricx group by error_facility||'-'||error_number order by 2, and optionally restrict to a certain time range. You can summarize on a more granular level, such as shared pool vs large pool on error_arg2 in case of ORA-4031. You can also find records of these errors in (undocumented) v$diag_incident or v$diag_diagv_incident. In any case, you may find this easier than grepping alert.log. For each incident, its session info is in x$dbgrikx.
x$dbkece | debug kernel error, critical error | Base table of undocumented v$diag_critical_error but includes facility dbge (Diagnostic Data Extractor or dde).
x$dbkefefc | debug kernel error, fatal error flood control | Rules for flood control on too many fatal errors.
x$dglparam | data guard logical parameters | Base table of dba_logstdby_parameters but includes invisible parameters.
x$diag_alert_ext | diagnostics alert extended | Base table of v$diag_alert_ext. Same as x$dbgalertext but has more lines and is slower to query.
x$diag_hm_run, x$diag_vhm_run | diagnostics health monitor runs | Base table of undocumented v$diag_(v)hm_run. Health monitor job records. Maybe complementary to v$hm_run?
x$diag_ips_configuration | diagnostics incident packaging service configuration | Base table of v$diag_ips_configuration. Some ADR IPS related config info. As with a few other v$diag* (or x$diag*) tables, some columns such as adr_home and name can’t be exactly matched, as if there were trailing characters. Use CTAS to create a regular table to query against, or use subquery factoring with the /*+materialize*/ hint.
x$dnfs_meta | dNFS metadata | Some metadata related to dNFS: SGA memory, message timeout, ping timeout, etc.
x$dra_failure | data recovery advisor failures | DRA failure names and descriptions.
x$drm_history, x$drm_history_stats | dynamic remastering history, stats | History of RAC DRM and stats. Parent_key is object_id. If an object is remastered to another node (new_master) too frequently, consider partitioning the app sessions. In 12.1.0.2, there’s also x$drm_wait_stats.
x$jskjobq | job scheduling ?, job queue | Internal job queue. Job_oid is object_id in dba_objects. If you must query this table, exit the session as soon as you’re done with your work, because after the query your session holds an exclusive JS lock, which will block the CJQ process! Rollback or commit won’t release the lock.
x$k2gte, x$k2gte2 | kernel 2-phase commit, global transaction entry | See Note:104420.1. Find sessions coming from or going to a remote database; in short, x$k2gte.k2gtdses matches v$session.saddr, and .k2gtdxcb matches v$transaction.addr.

select /*+ ordered */
       substr(s.ksusemnm,1,10)||'-'||substr(s.ksusepid,1,10) origin,
       substr(g.k2gtitid_ora,1,35) gtxid,
       substr(s.indx,1,4)||'.'||substr(s.ksuseser,1,5) lsession,
       s.ksuudlna username,
       substr(decode(bitand(ksuseidl,11),
                     1, 'ACTIVE',
                     0, decode(bitand(ksuseflg,4096), 0, 'INACTIVE', 'CACHED'),
                     2, 'SNIPED',
                     3, 'SNIPED',
                     'KILLED'),1,1) status,
       e.kslednam waiting
from   x$k2gte g, x$ktcxb t, x$ksuse s, x$ksled e
where  g.k2gtdxcb = t.ktcxbxba
and    g.k2gtdses = t.ktcxbses
and    s.addr = g.k2gtdses
and    e.indx = s.ksuseopc;
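For convenience, the two x$bh checks quoted in the table above can also be written out in full. This is only a sketch: X$ tables require connecting as SYS, their columns change between versions, and &child_latch_addr below is a placeholder for the child latch address you found in v$latch_children.

```sql
-- buffers protected by one cache buffers chains child latch
-- (&child_latch_addr is a placeholder: the addr of the child latch
--  whose sleeps look too high in v$latch_children)
select a.obj, a.dbarfil, a.dbablk
from   x$bh a, v$latch_children b
where  a.hladdr = b.addr
and    b.addr = hextoraw('&child_latch_addr');

-- buffers with more than two copies (clones) in the cache
select dbarfil, dbablk, count(*)
from   x$bh
group by dbarfil, dbablk
having count(*) > 2;
```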


how to get cpu load from awr

Hi, here is a script to get the CPU load from AWR for a single instance (instance number 2 in this example):

select start_time, round(100 * "'LOAD'" / "'NUM_CPU_CORES'") as load
from
(
  select os.instance_number, os.stat_name, sum(os.value) as load,
         min(cast(s.begin_interval_time as date)) as start_time,
         max(cast(s.end_interval_time as date)) as end_time
  from dba_hist_osstat os
  join dba_hist_snapshot s
    on s.snap_id = os.snap_id
   and s.instance_number = os.instance_number  -- without this, snapshot rows of other RAC instances inflate the sums
  where os.stat_name in ('LOAD','NUM_CPU_CORES')
  group by os.stat_name, trunc(cast(s.begin_interval_time as date),'HH24'), os.instance_number
)
pivot ( max(load) for stat_name in ('LOAD','NUM_CPU_CORES') )
where instance_number = 2
order by instance_number, start_time;
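OS statistic names in DBA_HIST_OSSTAT vary by platform, so before relying on 'LOAD' and 'NUM_CPU_CORES' it may be worth checking which statistics your AWR actually captures. A quick check:

```sql
-- list the OS statistics AWR has captured on this platform
select distinct stat_name
from dba_hist_osstat
order by stat_name;
```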

PS: useful for making graphs like these:

[Screenshots from 2015-11-16: hourly CPU load graphs built from the query output]

script to find locks in awr

select cast(min(ash.sample_time) as date) as start#
      ,round(24*60*(cast(max(ash.sample_time) as date) - cast(min(ash.sample_time) as date)),2) as duration#
      ,ash.sql_id, ash.top_level_sql_id, ash.blocking_session as b_sid, ash.blocking_session_serial# as b_serial#
      ,ash2.sql_exec_id b_sql_exec_id
      ,ash.event, do.object_name
      ,sum(decode(ash.session_state,'ON CPU',1,0)) "CPU"
      ,sum(decode(ash.session_state,'WAITING',1,0))
         - sum(decode(ash.session_state,'WAITING',decode(ash.wait_class,'User I/O',1,0),0)) "WAIT"
      ,sum(decode(ash.session_state,'WAITING',decode(ash.wait_class,'User I/O',1,0),0)) "IO"
      ,sum(decode(ash.session_state,'ON CPU',1,1)) "TOTAL"
      ,du.username, ash2.sql_exec_id
      ,dp.owner||nvl2(dp.object_name,'.'||dp.object_name,null)||nvl2(dp.procedure_name,'.'||dp.procedure_name,null) as pl_sql_obj
      ,ash2.machine as blocking_machine
from dba_hist_active_sess_history ash
  left join dba_objects do on do.object_id = ash.current_obj#
  join dba_hist_active_sess_history ash2
    on ash.blocking_session = ash2.session_id
   and ash.blocking_session_serial# = ash2.session_serial#
   and ash.snap_id = ash2.snap_id
  join dba_users du on du.user_id = ash2.user_id
  left join dba_procedures dp
    on dp.object_id = ash2.plsql_entry_object_id
   and dp.subprogram_id = ash2.plsql_entry_subprogram_id  -- original said ash.plsql_entry_subprogram_id, likely a typo
where ash.sql_id is not null
  and ash.sample_time > trunc(sysdate)
group by ash.sql_exec_id, ash2.sql_exec_id, ash2.machine, ash.session_id, ash.session_serial#, ash.event,
         ash.sql_id, ash.top_level_sql_id, ash.blocking_session, ash.blocking_session_serial#, ash2.sql_id,
         du.username,
         dp.owner||nvl2(dp.object_name,'.'||dp.object_name,null)||nvl2(dp.procedure_name,'.'||dp.procedure_name,null),
         do.object_name
having sum(decode(ash.session_state,'WAITING',1,0))
         - sum(decode(ash.session_state,'WAITING',decode(ash.wait_class,'User I/O',1,0),0)) > 0
   and max(ash.sample_time) - min(ash.sample_time) > interval '3' minute
order by 1, ash2.sql_exec_id;

Result:

START#                    |  DURATION# | SQL_ID        | TOP_LEVEL_SQL |      B_SID |  B_SERIAL# | B_SQL_EXEC_ID | EVENT                          | OBJECT_NAME          |        CPU |       WAIT |         IO |      TOTAL | USERNAME   | PL_SQL_OBJ | BLOCKING_MACHINE
------------------------- | ---------- | ------------- | ------------- | ---------- | ---------- | ------------- | ------------------------------ | -------------------- | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- | --------------------------
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777221 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777221 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |         21 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1832 |      38589 |      16777230 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |      15876 |          0 |      15876 | EOS        | <NULL>     | xxxxxxxapp11.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777232 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2500 |          0 |       2500 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777232 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2500 |          0 |       2500 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777263 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777263 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777483 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777483 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777991 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16777991 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16778307 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16778307 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16779789 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 08.34.21       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |        955 |      30987 |      16779789 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         50 |          0 |         50 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       1638 |          0 |       1638 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       1638 |          0 |       1638 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       1638 |          0 |       1638 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16777326 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16777326 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16777326 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16779119 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16779119 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16779119 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16780460 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16780460 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.33       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16780460 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         26 |          0 |         26 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.40       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       1575 |          0 |       1575 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.40       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16777326 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.40       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16779119 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.40.40       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1520 |      40745 |      16780460 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp10.xxxx.local
24.11.2015 09.51.05       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |          9 |      28281 |      16777216 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 09.51.05       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |          9 |      28281 |      16777282 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3150 |          0 |       3150 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 09.51.05       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |          9 |      28281 |      16780480 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 09.51.05       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |          9 |      28281 |      16780498 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 09.51.05       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |          9 |      28281 |      25502663 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 09.51.05       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |          9 |      28281 |      28318376 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 09.51.05       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |          9 |      28281 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 09.51.05       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |          9 |      28281 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 10.04.18       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |        448 |      52711 |      16777231 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.04.18       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |        448 |      52711 |      16777298 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       1444 |          0 |       1444 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16777300 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |        650 |          0 |        650 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16777300 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |        650 |          0 |        650 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16777311 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16777311 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16777313 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16777313 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16779344 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16779344 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16782095 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16782095 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |        448 |      52711 |      16782272 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16794081 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 |      16794081 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.04.18       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1019 |       6185 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp09.xxxx.local
24.11.2015 10.10.39       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |        766 |      15691 |      16777299 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2812 |          0 |       2812 | EOS        | <NULL>     | xxxxxxxapp11.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777322 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2331 |          0 |       2331 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777322 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2331 |          0 |       2331 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777322 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2331 |          0 |       2331 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777322 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2331 |          0 |       2331 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16779690 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16779690 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16779690 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16779690 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16781164 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16781164 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16781164 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16781164 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         74 |          0 |         74 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         74 |          0 |         74 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         74 |          0 |         74 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.34       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         74 |          0 |         74 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.39       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777279 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.39       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16777322 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2394 |          0 |       2394 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.39       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16779690 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.39       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 |      16781164 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.38.39       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1074 |      21737 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         76 |          0 |         76 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 10.44.45       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1902 |      33545 |      16777324 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3800 |          0 |       3800 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.44.45       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1902 |      33545 |      16777324 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3800 |          0 |       3800 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.44.45       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1902 |      33545 |      16779227 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.44.45       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1902 |      33545 |      16779227 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.44.45       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1902 |      33545 |      16782940 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.44.45       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1902 |      33545 |      16782940 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         38 |          0 |         38 | EOS        | <NULL>     | xxxxxxxapp07.xxxx.local
24.11.2015 10.45.00       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1705 |      47971 |      16777221 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         49 |          0 |         49 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 10.45.00       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1705 |      47971 |      16777323 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       5537 |          0 |       5537 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 10.45.00       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1705 |      47971 |      16778606 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         49 |          0 |         49 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 10.45.00       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1705 |      47971 |      16779700 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         49 |          0 |         49 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 10.45.00       |          8 | by5pctpk8t9f6 | by5pctpk8t9f6 |       1705 |      47971 |      16781227 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         49 |          0 |         49 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 11.10.40       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2846 |      53187 |      16777344 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2812 |          0 |       2812 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 11.10.40       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2846 |      53187 |      16777723 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 11.10.40       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2846 |      53187 |      16778841 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 11.10.40       |          6 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2846 |      53187 | <NULL>        | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         37 |          0 |         37 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 11.19.02       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2521 |      48929 |      16777348 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       2850 |          0 |       2850 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 11.19.02       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2521 |      48929 |      16777366 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 11.19.02       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2521 |      48929 |      16781519 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp04.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777221 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777221 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777221 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777221 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777226 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777226 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777226 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777226 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777364 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3969 |          0 |       3969 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777364 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3969 |          0 |       3969 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777364 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3969 |          0 |       3969 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777364 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3969 |          0 |       3969 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16779386 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16779386 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16779386 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.14       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16779386 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.18       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777221 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.18       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777226 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.18       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16777364 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3969 |          0 |       3969 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.34.18       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |         10 |       3747 |      16779386 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp01.xxxx.local
24.11.2015 11.51.07       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2905 |      49181 |      33554433 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 11.51.07       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2905 |      49181 |      33554434 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 11.51.07       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2905 |      49181 |      33554435 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         25 |          0 |         25 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 11.51.07       |          4 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2905 |      49181 |      33554443 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3150 |          0 |       3150 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 12.04.20       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2147 |      47127 |      16777277 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 12.04.20       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2147 |      47127 |      16777308 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 12.04.20       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2147 |      47127 |      16777384 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |       3969 |          0 |       3969 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 12.04.20       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2147 |      47127 |      16777431 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 12.04.20       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2147 |      47127 |      16780048 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 12.04.20       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2147 |      47127 |      16782367 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 12.04.20       |         10 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2147 |      47127 |      16813229 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |         63 |          0 |         63 | EOS        | <NULL>     | xxxxxxxapp03.xxxx.local
24.11.2015 12.06.20       |         17 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2533 |      57525 |      16777257 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |        101 |          0 |        101 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 12.06.20       |         17 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2533 |      57525 |      16777260 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |        101 |          0 |        101 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 12.06.20       |         17 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2533 |      57525 |      16777390 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |      11413 |          0 |      11413 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 12.06.20       |         17 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2533 |      57525 |      16778344 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |        101 |          0 |        101 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local
24.11.2015 12.06.20       |         17 | by5pctpk8t9f6 | by5pctpk8t9f6 |       2533 |      57525 |      16779246 | enq: TX - row lock contention  | FDC_ROSIM_TICKETS    |          0 |        101 |          0 |        101 | EOS        | <NULL>     | xxxxxxxapp06.xxxx.local

wait events with descriptions


original

db file sequential read

Possible Causes :
· Use of an unselective index
· Fragmented indexes
· High I/O on a particular disk or mount point
· Bad application design
· Index read performance can be affected by a slow I/O subsystem and/or poor database file layout, which results in a higher average wait time

Actions :
· Check indexes on the table to ensure that the right index is being used
· Check the column order of the index with the WHERE clause of the Top SQL statements
· Rebuild indexes with a high clustering factor
· Use partitioning to reduce the amount of blocks being visited
· Make sure optimizer statistics are up to date
· Relocate ‘hot’ datafiles
· Consider the usage of multiple buffer pools and cache frequently used indexes/tables in the KEEP pool
· Inspect the execution plans of the SQL statements that access data through indexes
· Is it appropriate for the SQL statements to access data through index lookups?
· Would full table scans be more efficient?
· Do the statements use the right driving table?
· The optimization goal is to minimize both the number of logical and physical I/Os.

Remarks:
· The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.
· Significant db file sequential read wait time is most likely an application issue.
· If the DBA_INDEXES.CLUSTERING_FACTOR of the index approaches the number of blocks in the table, then most of the rows in the table are ordered. This is desirable.

· However, if the clustering factor approaches the number of rows in the table, it means the rows in the table are randomly ordered and thus it requires more I/Os to complete the operation. You can improve the index’s clustering factor by rebuilding the table so that rows are ordered according to the index key and rebuilding the index thereafter.

· The OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING initialization parameters can influence the optimizer to favour the nested loops operation and choose an index access path over a full table scan.
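
The object behind the current single-block reads can be identified from the wait's parameters (file# in P1, block# in P2). A diagnostic sketch against the standard v$session_wait and dba_extents views; note the dba_extents lookup can be slow on databases with many extents:

select sid, p1 file_nr, p2 block_nr
from v$session_wait
where event = 'db file sequential read';

select owner, segment_name, segment_type
from dba_extents
where file_id = &file_nr
  and &block_nr between block_id and block_id + blocks - 1;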

db file scattered read

Possible Causes :
· The Oracle session has requested and is waiting for multiple contiguous database blocks (up to DB_FILE_MULTIBLOCK_READ_COUNT) to be read into the SGA from disk.
· Full Table scans
· Fast Full Index Scans

Actions :
· Optimize multi-block I/O by setting the parameter DB_FILE_MULTIBLOCK_READ_COUNT
· Use partition pruning to reduce the number of blocks visited
· Consider using multiple buffer pools and caching frequently used indexes/tables in the KEEP pool
· Optimize the SQL statements that initiated most of the waits. The goal is to minimize the number of physical and logical reads.
· Should the statement access the data by a full table scan or index fast full scan? Would an index range or unique scan be more efficient? Does the query use the right driving table?
· Are the SQL predicates appropriate for a hash or merge join?
· If full scans are appropriate, can parallel query improve the response time?
· The objective is to reduce the demand for both logical and physical I/Os, and this is best achieved through SQL and application tuning.
· Make sure all statistics are representative of the actual data. Check the LAST_ANALYZED date.

Remarks:
· If an application that has been running fine for a while suddenly clocks a lot of time on the db file scattered read event and there hasn’t been a code change, check whether one or more indexes have been dropped or become unusable, or whether the statistics have become stale.
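
The statements doing the multiblock reads are usually visible in v$sql ordered by disk reads. A sketch (the row limit of 10 is arbitrary):

select *
from (select sql_id, executions, disk_reads,
             substr(sql_text, 1, 60) sql_text
      from v$sql
      order by disk_reads desc)
where rownum <= 10;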

log file parallel write

Possible Causes :
· LGWR waits while writing contents of the redo log buffer cache to the online log files on disk
· I/O wait on sub system holding the online redo log files

Actions :
· Reduce the amount of redo being generated
· Do not leave tablespaces in hot backup mode for longer than necessary
· Do not use RAID 5 for redo log files
· Use faster disks for redo log files
· Ensure that the disks holding the archived redo log files and the online redo log files are separate so as to avoid contention
· Consider using NOLOGGING or UNRECOVERABLE options in SQL statements
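
To gauge how much redo the instance generates and how much work LGWR is doing, the cumulative counters in v$sysstat can be sampled twice and diffed (a sketch; statistic names can vary slightly by version):

select name, value
from v$sysstat
where name in ('redo size', 'redo writes', 'redo write time');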

log file sync:

Possible Causes :
· Oracle foreground processes are waiting for a COMMIT or ROLLBACK to complete
Actions :
· Tune LGWR to get good throughput to disk eg: Do not put redo logs on RAID5
· Reduce overall number of commits by batching transactions so that there are fewer distinct COMMIT operations

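
Since log file sync includes the time LGWR spends in log file parallel write, comparing their average waits shows whether the bottleneck is the I/O itself or the commit rate. A sketch against v$system_event (time_waited_micro exists in 10g and later):

select event, total_waits,
       round(time_waited_micro / nullif(total_waits, 0)) avg_wait_us
from v$system_event
where event in ('log file sync', 'log file parallel write');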

buffer busy waits:

Possible Causes :
· Buffer busy waits are common in an I/O-bound Oracle system.
· The two main cases where this can occur are:
· Another session is reading the block into the buffer
· Another session holds the buffer in an incompatible mode to our request
· These waits indicate read/read, read/write, or write/write contention.
· The Oracle session is waiting to pin a buffer. A buffer must be pinned before it can be read or modified. Only one process can pin a buffer at any one time.
· This wait can be intensified by a large block size, as more rows can be contained within the block
· This wait happens when a session wants to access a database block in the buffer cache but cannot, as the buffer is “busy”
· It is also often due to several processes repeatedly reading the same blocks (eg: lots of people scan the same index or data block)
Actions :
· The main way to reduce buffer busy waits is to reduce the total I/O on the system
· Depending on the block type, the actions will differ

Data Blocks

· Eliminate HOT blocks from the application. Check for repeatedly scanned / unselective indexes.
· Try rebuilding the object with a higher PCTFREE so that you reduce the number of rows per block.
· Check for ‘right-hand indexes’ (indexes that get inserted into at the same point by many processes).
· Increase INITRANS and MAXTRANS and reduce PCTUSED. This will make the table less dense.
· Reduce the number of rows per block

Segment Header

· Increase the number of FREELISTs and FREELIST GROUPs

Undo Header

· Increase the number of Rollback Segments
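
To see which block class the buffer busy waits are on (data block, segment header, undo header, etc.), query v$waitstat; the class with the highest time points at which of the actions above applies:

select class, count, time
from v$waitstat
order by time desc;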

free buffer waits:

Possible Causes :
· This means we are waiting for a free buffer but there are none available in the cache because there are too many dirty buffers in the cache
· Either the buffer cache is too small or the DBWR is slow in writing modified buffers to disk
· DBWR is unable to keep up with the write requests
· Checkpoints happening too fast – maybe due to high database activity and under-sized online redo log files
· Large sorts and full table scans are filling the cache with modified blocks faster than the DBWR is able to write to disk
· If the number of dirty buffers that need to be written to disk is larger than the number that DBWR can write per batch, then these waits can be observed

Actions :
· Reduce checkpoint frequency – increase the size of the online redo log files
· Examine the size of the buffer cache – consider increasing the size of the buffer cache in the SGA
· Set disk_asynch_io = true
· If not using asynchronous I/O, increase the number of db writer processes or dbwr slaves
· Ensure hot spots do not exist by spreading datafiles over disks and disk controllers
· Pre-sorting or reorganizing data can help
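
Whether DBWR is keeping up can be estimated from the cumulative counters in v$sysstat (sample twice and diff); a rising 'free buffer inspected' relative to 'free buffer requested' suggests the cache is short of clean buffers:

select name, value
from v$sysstat
where name in ('free buffer requested', 'free buffer inspected',
               'dirty buffers inspected', 'physical writes');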

enqueue waits

Possible Causes :
· This wait event indicates a wait for a lock that is held by another session (or sessions) in an incompatible mode to the requested mode.

TX Transaction Lock

· Generally due to table or application set up issues

· This indicates contention for a row-level lock. This wait occurs when a transaction tries to update or delete rows that are currently locked by another transaction.

· This usually is an application issue.

TM DML enqueue lock

· Generally due to application issues, particularly if foreign key constraints have not been indexed.

ST lock

· Database actions that modify the UET$ (used extent) and FET$ (free extent) tables require the ST lock, which includes actions such as drop, truncate, and coalesce.

· Contention for the ST lock indicates there are multiple sessions actively performing dynamic disk space allocation or deallocation in dictionary-managed tablespaces

Actions :
· Reduce waits and wait times
· The action to take depends on the lock type which is causing the most problems
· Whenever you see an enqueue wait event for the TX enqueue, the first step is to find out who the blocker is and whether there are multiple waiters for the same resource
· Waits for the TM enqueue in Mode 3 are primarily due to unindexed foreign key columns; create indexes on those foreign keys
· Following are some of the things you can do to minimize ST lock contention in your database:
· Use locally managed tablespaces
· Recreate all temporary tablespaces using the CREATE TEMPORARY TABLESPACE TEMPFILE… command
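
To find the blocker behind TX waits, the blocking_session column of v$session (10g and later) is usually the quickest route. A diagnostic sketch:

select sid, serial#, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null;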

cache buffers lru chain latch

Possible Causes :
· Processes need to get this latch when they move buffers based on the LRU block replacement policy in the buffer cache
· The cache buffers lru chain latch is acquired in order to introduce a new block into the buffer cache and when writing a buffer back to disk, specifically when scanning the LRU (least recently used) chain containing the dirty blocks in the buffer cache
· Contention for this latch is symptomatic of intense buffer cache activity caused by inefficient SQL statements. Statements that repeatedly scan large unselective indexes or perform full table scans are the prime culprits.
· Heavy contention for this latch is generally due to heavy buffer cache activity, caused for example by repeatedly scanning large unselective indexes

Actions :
· Contention for this latch can be avoided by implementing multiple buffer pools or by increasing the number of LRU latches with the parameter DB_BLOCK_LRU_LATCHES (the default value is generally sufficient for most systems)
· It is possible to reduce contention for the cache buffers lru chain latch by increasing the size of the buffer cache, thereby reducing the rate at which new blocks are introduced into the buffer cache
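
Relative pressure on the two buffer-cache latches can be checked in v$latch; sleeps is the number that matters, since misses resolved by spinning are comparatively cheap:

select name, gets, misses, sleeps
from v$latch
where name in ('cache buffers lru chain', 'cache buffers chains');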

Direct Path Reads

Possible Causes :
· These waits are associated with direct read operations which read data directly into the session’s PGA, bypassing the SGA
· The “direct path read” and “direct path write” wait events are related to operations that are performed in the PGA, like sorting, group by operations and hash joins
· In DSS-type systems, or during heavy batch periods, waits on “direct path read” are quite normal. However, for an OLTP system these waits are significant
· These wait events can occur during sorting operations, which is not surprising as direct path reads and writes usually occur in connection with temporary segments
· SQL statements with functions that require sorts, such as ORDER BY, GROUP BY, UNION, DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input size is larger than the work area in the PGA

Actions :
· Ensure the OS asynchronous IO is configured correctly
· Check for IO-heavy sessions / SQL and see if the amount of IO can be reduced
· Ensure no disks are IO bound
· Set PGA_AGGREGATE_TARGET to an appropriate value (if the parameter WORKAREA_SIZE_POLICY = AUTO), or set *_area_size manually (like sort_area_size), in which case you have to set WORKAREA_SIZE_POLICY = MANUAL
· Whenever possible use UNION ALL instead of UNION, and where applicable use HASH JOIN instead of SORT MERGE and NESTED LOOPS instead of HASH JOIN. Make sure the optimizer selects the right driving table.
· Check to see if the composite index’s columns can be rearranged to match the ORDER BY clause to avoid the sort entirely
· Also, consider automating the SQL work areas using PGA_AGGREGATE_TARGET in Oracle9i Database
· Query V$SESSTAT to identify sessions with high “physical reads direct”

Remark:
· Default size of HASH_AREA_SIZE is twice that of SORT_AREA_SIZE

· Larger HASH_AREA_SIZE will influence optimizer to go for hash joins instead of nested loops

· The hidden parameter DB_FILE_DIRECT_IO_COUNT can impact direct path read performance. It sets the maximum I/O buffer size of direct read and write operations. The default is 1M in 9i
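
The sessions doing most direct reads can be found as suggested above. A sketch joining v$sesstat to v$statname:

select s.sid, t.value as phys_reads_direct
from v$sesstat t, v$statname n, v$session s
where t.statistic# = n.statistic#
  and t.sid = s.sid
  and n.name = 'physical reads direct'
  and t.value > 0
order by t.value desc;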

Direct Path Writes:

Possible Causes :
· These are waits that are associated with direct write operations that write data from users’ PGAs to data files or temporary tablespaces
· Direct load operations (eg: Create Table as Select (CTAS) may use this)
· Parallel DML operations
· Sort IO (when a sort does not fit in memory)

Actions :
· If the file indicates a temporary tablespace, check for unexpected disk sort operations
· Ensure asynchronous I/O is set to TRUE. This is unlikely to reduce wait times from the wait event timings but may reduce sessions’ elapsed times (as synchronous direct IO is not accounted for in wait event timings)
· Ensure the OS asynchronous IO is configured correctly
· Ensure no disks are IO bound
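
Unexpected disk sorts can be spotted while they run by looking at current temporary segment usage (v$tempseg_usage; called v$sort_usage in older releases):

select s.sid, u.tablespace, u.segtype, u.blocks
from v$tempseg_usage u, v$session s
where u.session_addr = s.saddr;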

Latch Free Waits
Possible Causes :
· This wait indicates that the process is waiting for a latch that is currently busy (held by another process).
· When you see a latch free wait event in the V$SESSION_WAIT view, it means the process failed to obtain the latch in the willing-to-wait mode after spinning _SPIN_COUNT times and went to sleep. When processes compete heavily for latches, they will also consume more CPU resources because of spinning. The result is a higher response time.

Actions :
· If the TIME spent waiting for latches is significant then it is best to determine which latches are suffering from contention.
Remark:
· A latch is a kind of low level lock. Latches apply only to memory structures in the SGA. They do not apply to database objects. An Oracle SGA has many latches, and they exist to protect various memory structures from potential corruption by concurrent access.

· The time spent on latch waits is an effect, not a cause; the cause is that you are doing too many block gets, and block gets require cache buffer chain latching
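
The latches suffering the most contention can be ranked by sleeps. A sketch (the row limit is arbitrary):

select name, gets, misses, sleeps
from (select name, gets, misses, sleeps
      from v$latch
      order by sleeps desc)
where rownum <= 10;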

Library cache latch

Possible Causes :
· The library cache latches protect the cached SQL statements and objects definitions held in the library cache within the shared pool. The library cache latch must be acquired in order to add a new statement to the library cache.

· Application is making heavy use of literal SQL- use of bind variables will reduce this latch considerably

Actions :
· Ensure that the application reuses shared SQL statement representations as much as possible. Use bind variables whenever possible in the application.

· You can reduce the library cache latch hold time by properly setting the SESSION_CACHED_CURSORS parameter.
· Consider increasing shared pool.
Remark:
· Larger shared pools tend to have long free lists, and processes that need to allocate space in them must spend extra time scanning the long free lists while holding the shared pool latch
· If your database is not yet on Oracle9i Database, an oversized shared pool can increase the contention for the shared pool latch.
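
Literal SQL – the main driver of this latch – shows up as many nearly identical statements in v$sql. Grouping on a prefix of the text is a rough but effective way to find the worst offenders (the prefix length and threshold here are arbitrary):

select substr(sql_text, 1, 40) sql_prefix, count(*) copies
from v$sql
group by substr(sql_text, 1, 40)
having count(*) > 20
order by copies desc;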

Shared pool latch

Possible Causes :
The shared pool latch is used to protect critical operations when allocating and freeing memory in the shared pool

Contention for the shared pool and library cache latches is mainly due to intense hard parsing. A hard parse applies to new cursors and cursors that have aged out and must be re-parsed

The cost of parsing a new SQL statement is expensive both in terms of CPU requirements and the number of times the library cache and shared pool latches may need to be acquired and released.

Actions :
· Ways to reduce shared pool latch contention: avoid hard parses when possible – parse once, execute many.
· Eliminating literal SQL is also useful to avoid the shared pool latch. The size of the shared pool and the use of MTS (shared server option) also greatly influence the shared pool latch.
· A workaround is to set the initialization parameter CURSOR_SHARING to FORCE. This allows statements that differ only in literal values to share a cursor and therefore reduces latch contention, memory usage, and hard parsing.
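
The hard-parse rate that drives this latch can be read from v$sysstat; a hard parse count close to the total parse count indicates poor cursor sharing (sample twice and diff the cumulative values):

select name, value
from v$sysstat
where name in ('parse count (total)', 'parse count (hard)', 'execute count');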

Row cache objects latch

Possible Causes :
This latch comes into play when user processes are attempting to access the cached data dictionary values.

Actions :
· It is not common to have contention in this latch and the only way to reduce contention for this latch is by increasing the size of the shared pool (SHARED_POOL_SIZE).
· Use Locally Managed tablespaces for your application objects especially indexes
· Review and amend your database logical design; a good example is to merge or decrease the number of indexes on tables with heavy inserts
Remark:
· Configuring the library cache to an acceptable size usually ensures that the data dictionary cache is also properly sized. So tuning Library Cache will tune Row Cache indirectly.
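
Dictionary cache efficiency can be checked per cache in v$rowcache; consistently high miss percentages on a busy system support growing the shared pool:

select parameter, gets, getmisses,
       round(100 * getmisses / nullif(gets, 0), 2) miss_pct
from v$rowcache
where gets > 0
order by getmisses desc;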

trace trigger


grant execute on dbms_monitor to some_user;

db username:

CREATE OR REPLACE TRIGGER trace_some_user
AFTER LOGON ON some_user.SCHEMA
BEGIN
  execute immediate 'alter session set timed_statistics = true';
  execute immediate 'alter session set max_dump_file_size = unlimited';
  execute immediate 'alter session set tracefile_identifier = ''some_user''';
  dbms_monitor.session_trace_enable(null, null, true, true);
END;
/

Os username:

CREATE OR REPLACE TRIGGER trace_some_user
AFTER LOGON ON DATABASE
DECLARE
  v_user varchar2(100);
BEGIN
  SELECT SYS_CONTEXT('USERENV', 'OS_USER') INTO v_user FROM dual;

  IF v_user = 'some_user' THEN
    execute immediate 'alter session set timed_statistics = true';
    execute immediate 'alter session set max_dump_file_size = unlimited';
    execute immediate 'alter session set tracefile_identifier = ''some_user''';
    dbms_monitor.session_trace_enable(null, null, true, true);
  END IF;
END;
/

find current effective hidden param

col PARAMETER format a20
col SESSION_VALUE format a20
col INSTANCE_VALUE format a20
col DEFAULT_VALUE format a20
col DESCRIPTION format a100
select a.ksppinm as Parameter,
b.ksppstvl Session_Value, 
c.ksppstvl Instance_Value ,
b.ksppstdf Default_value,
decode(bitand(a.ksppiflg/256,3),1, 'True', 'False') SESSMOD,
decode(bitand(a.ksppiflg/65536,3),1,'IMMEDIATE',2,'DEFERRED',3,'IMMEDIATE','FALSE') SYSMOD,
a.ksppdesc Description 
from sys.x$ksppi a, sys.x$ksppcv b , sys.x$ksppsv c where a.indx = b.indx and a.indx = c.indx
and REGEXP_LIKE (ksppinm, '^\_[^\_]')
and lower(b.ksppstdf) !='true' 
order by a.ksppinm;

script to analyze partition tables data distribution

select table_owner || '.' || table_name, partition_name,
       100 * (round(num_rows / case when sum(num_rows) over (partition by table_name) > 0
                                    then sum(num_rows) over (partition by table_name)
                                    else 1
                               end, 3)) as pct_of_rows,
       num_rows,
       sum(num_rows) over (partition by table_name) total_row_cnt
from dba_tab_partitions
where table_name in
      (select table_name from dba_part_tables dpt
       where dpt.owner not like 'SYS%'
         and dpt.interval is null
         and dpt.partitioning_type like '%RANGE%')
order by 1, 3 desc nulls last;

TABLE_NAME                               | PARTITION_NAME                 | PCT_OF_ROWS |   NUM_ROWS | TOTAL_ROW_CNT
---------------------------------------- | ------------------------------ | ----------- | ---------- | -------------
XXXX.OPN_HIS                             | P_MAX                          |         100 |   63308846 |      63309145
XXXX.OPN_HIS                             | P_2012_09                      |           0 |          2 |      63309145 
XXXX.OPN_HIS                             | P_2012_10                      |           0 |        297 |      63309145
XXXX.OPN_HIS                             | P_2012_08                      | <NULL>      | <NULL>     |      63309145

disk latency from ash by time


Some queries to measure disk I/O:

 select 100*(round ( count (*)/sum(count(*)) over(),2 )) as pct ,nvl(wait_class,'CPU') wait_class from v$active_session_history group by wait_class order by 1 desc;
 select 100*(round ( count (*)/sum(count(*)) over(),2 )) as pct ,nvl(event,'CPU') event from v$active_session_history where wait_class like '%I/O%' group by event order by 1 desc; 

generate pivot list

 select listagg( ''''||event||'''',',') within group (order by event )from v$active_session_history where wait_class like '%I/O%' group by event ;

or top wait pivot list:

with disk_events as  (
  select 100*(round ( count (*)/sum(count(*)) over(),2 )) as pct ,nvl(event,'CPU') event from v$active_session_history where wait_class like '%I/O%' group by event )
 select listagg( ''''||event||'''',',') within group (order by event )from disk_events where pct >5 ;

query and result

select * from (
  select event,
         case when est_waits > 0
              then round(est_dbtime_ms / est_waits, 1)
              else null
         end as est_avg_latency_ms,
         time#
  from (
        select event,
               round(sum(case when time_waited > 0
                              then greatest(1, 1000000 / time_waited)
                              else 0 end)) as est_waits,
               sum(1000) as est_dbtime_ms,
               trunc(ash.sample_time, 'MI')
                 - mod(extract(minute from ash.sample_time), 5) / (24 * 60) time#
        from v$active_session_history ash
        where ash.wait_class = 'User I/O'
        group by trunc(sample_time, 'MI')
                   - mod(extract(minute from sample_time), 5) / (24 * 60), event
       )
)
pivot ( sum(est_avg_latency_ms) for event in (
  'db file scattered read', 'db file sequential read', 'log file parallel write')
) order by time#;

TIME#               |                'db file scattered read' |               'db file sequential read' |               'log file parallel write'
------------------- | --------------------------------------- | --------------------------------------- | ---------------------------------------
10.08.2015 14.10.00 |                                     2,2 |                                      ,3 |                                  <NULL>
10.08.2015 14.15.00 |                                     2,4 |                                      ,2 |                                  <NULL>
10.08.2015 14.20.00 |                                     1,7 |                                      ,3 |                                  <NULL>
10.08.2015 14.25.00 |                                     1,3 |                                      ,3 |                                  <NULL>
10.08.2015 14.30.00 |                                     3,5 |                                      ,2 |                                  <NULL>
10.08.2015 14.35.00 |                                     1,8 |                                      ,3 |                                  <NULL>
10.08.2015 14.40.00 |                                     1,1 |                                      ,3 |                                  <NULL>
10.08.2015 14.45.00 |                                     2,6 |                                      ,2 |                                  <NULL>
10.08.2015 14.50.00 |                                     1,5 |                                      ,3 |                                  <NULL>
10.08.2015 14.55.00 |                                     1,2 |                                      ,3 |                                  <NULL>
10.08.2015 15.00.00 |                                     2,6 |                                      ,2 |                                  <NULL>
10.08.2015 15.05.00 |                                     1,3 |                                      ,4 |                                  <NULL>
10.08.2015 15.10.00 |                                     1,6 |                                      ,3 |                                  <NULL>
10.08.2015 15.15.00 |                                       3 |                                      ,2 |                                  <NULL>
10.08.2015 15.20.00 |                                     1,4 |                                      ,3 |                                  <NULL>
10.08.2015 15.25.00 |                                     1,6 |                                      ,3 |                                  <NULL>
10.08.2015 15.30.00 |                                     3,1 |                                      ,2 |                                  <NULL>
10.08.2015 15.35.00 |                                     1,8 |                                      ,4 |                                  <NULL>
10.08.2015 15.40.00 |                                     1,2 |                                      ,3 |                                  <NULL>
10.08.2015 15.45.00 |                                     2,5 |                                      ,2 |                                  <NULL>
10.08.2015 15.50.00 |                                     1,6 |                                      ,3 |                                  <NULL>
10.08.2015 15.55.00 |                                     1,4 |                                      ,3 |                                  <NULL>
10.08.2015 16.00.00 |                                     4,8 |                                      ,2 |                                  <NULL>
10.08.2015 16.05.00 |                                     1,8 |                                      ,4 |                                  <NULL>
10.08.2015 16.10.00 |                                     1,2 |                                      ,4 |                                  <NULL>
10.08.2015 16.15.00 |                                       3 |                                      ,2 |                                  <NULL>
10.08.2015 16.20.00 |                                     1,3 |                                      ,3 |                                  <NULL>
10.08.2015 16.25.00 |                                     1,1 |                                      ,3 |                                  <NULL>
10.08.2015 16.30.00 |                                     3,1 |                                      ,2 |                                  <NULL>
10.08.2015 16.35.00 |                                     1,6 |                                      ,3 |                                  <NULL>
10.08.2015 16.40.00 |                                     1,6 |                                      ,3 |                                  <NULL>
10.08.2015 16.45.00 |                                     2,9 |                                      ,2 |                                  <NULL>
10.08.2015 16.50.00 |                                     1,6 |                                      ,3 |                                  <NULL>
10.08.2015 16.55.00 |                                     1,3 |                                      ,2 |                                  <NULL>
10.08.2015 17.00.00 |                                     2,2 |                                      ,2 |                                  <NULL>
10.08.2015 17.05.00 |                                     1,5 |                                      ,4 |                                  <NULL>
10.08.2015 17.10.00 |                                     1,7 |                                      ,3 |                                  <NULL>
10.08.2015 17.15.00 |                                       3 |                                      ,2 |                                  <NULL>
10.08.2015 17.20.00 |                                     1,8 |                                      ,3 |                                  <NULL>
10.08.2015 17.25.00 |                                     1,3 |                                      ,3 |                                  <NULL>
10.08.2015 17.30.00 |                                     3,3 |                                      ,3 |                                  <NULL>
10.08.2015 17.35.00 |                                     1,6 |                                      ,3 |                                  <NULL>
10.08.2015 17.40.00 |                                      ,7 |                                      ,3 |                                  <NULL>

this output can be used to build graphs:

[screenshot: estimated latency graph, 2015-08-10 at 17:38]

awr report by hour


The simplest way to generate AWR reports by hour after a benchmark.
export awr

@?/rdbms/admin/awrextr.sql 

import awr

@?/rdbms/admin/awrload.sql

-- make sure to set linesize appropriately
-- set linesize 152

generate awr

set termout off
set linesize 80
set pagesize 10000
select 'spool awrrpt_dwhfrn_' || snap_id || '.txt' || chr(13) ||
       'select output from table(dbms_workload_repository.awr_report_text(' || dbid || ',1,' ||
       snap_id || ',' || lead(snap_id, 1) over (order by dbid, snap_id) || ',0));' || chr(13) ||
       'spool off'
from dba_hist_snapshot
where dbid != 1399642255
  and extract(minute from begin_interval_time) < 2;

Oracle: latch: cache buffers chains


Saved at my blog from the original.

Waits on the cache buffers chains latch, i.e. the wait event «latch: cache buffers chains», happen when there is extremely high and concurrent access to the same block in a database. Access to a block is normally a fast operation, but if concurrent users access a block fast enough and repeatedly, then simple access to the block can become a bottleneck. The most common occurrence of CBC (cache buffers chains) latch contention happens when multiple users run nested loop joins on a table and access the table driven into via an index. Since the NL join is basically

  For all rows in i
     look up a value in j  where j.field1 = i.val
  end loop

then table j’s index on field1 will get hit for every row returned from i. Now, if the lookup on i returns a lot of rows and multiple users are running this same query, then the root block of the index on j(field1) is going to get hammered.

In order to solve a CBC latch bottleneck we need to know what SQL is causing the bottleneck and what table or index that the SQL statement is using is causing the bottleneck.

From ASH data this is fairly easy:

    select
          count(*),
          sql_id,
          nvl(o.object_name,ash.current_obj#) objn,
          substr(o.object_type,0,10) otype,
          CURRENT_FILE# fn,
          CURRENT_BLOCK# blockn
    from  v$active_session_history ash
        , all_objects o
    where event like 'latch: cache buffers chains'
      and o.object_id (+)= ash.CURRENT_OBJ#
    group by sql_id, current_obj#, current_file#,
                   current_block#, o.object_name,o.object_type
    order by count(*)
    /        

From the output it looks like we have both the SQL (at least the id; we can get the text with the id) and the block:

    CNT SQL_ID        OBJN     OTYPE   FN BLOCKN
    ---- ------------- -------- ------ --- ------
      84 a09r4dwjpv01q MYDUAL   TABLE    1  93170

But the block is actually probably left over from a recent I/O and is not necessarily the CBC hot block, though it might be.
We can investigate further by looking at P1, P2 and P3 for the CBC latch wait. How can we find out what P1, P2 and P3 mean? By looking them up in V$EVENT_NAME:

    select * from v$event_name
    where name = 'latch: cache buffers chains'

    EVENT#     NAME                         PARAMETER1 PARAMETER2 PARAMETER3
    ---------- ---------------------------- ---------- ---------- ----------
            58 latch: cache buffers chains     address     number      tries 

So P1 is the address of the latch for the cbc latch wait.
Now we can group the CBC latch waits by the address and find out what address had the most waits:

    select
        count(*),
        lpad(replace(to_char(p1,'XXXXXXXXX'),' ','0'),16,0) laddr
    from v$active_session_history
    where event='latch: cache buffers chains'
    group by p1
    order by count(*);

    COUNT(*)  LADDR
    ---------- ----------------
          4933 00000004D8108330   

 

In this case, there is only one address that we had waits for, so now we can look up what blocks (headers, actually) were at that address:

   select o.name, bh.dbarfil, bh.dbablk, bh.tch
    from x$bh bh, obj$ o
    where tch > 5
      and hladdr='00000004D8108330'
      and o.obj#=bh.obj
    order by tch

    NAME        DBARFIL DBABLK  TCH
    ----------- ------- ------ ----
    EMP_CLUSTER       4    394  120

We look for the block with the highest «TCH», or «touch count». Touch count is a count of the times the block has been accessed. The count has some restrictions: it is only incremented once every 3 seconds, so even if I access the block a million times a second, the count will only go up once every 3 seconds. Also, unfortunately, the count gets zeroed out when the block cycles through the buffer cache, but probably the most unfortunate restriction is that this analysis only works while the problem is actually happening. Once the problem is over, the blocks will usually get pushed out of the buffer cache.
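The touch-count throttling described above can be modeled with a small sketch. This is a toy model, not Oracle's actual implementation; the class, the injectable clock, and all names are illustrative assumptions, with only the 3-second window taken from the text:

```python
import time

class BufferHeader:
    """Toy model of a buffer header's touch count (tch): it increments
    at most once per window (3 seconds in Oracle), no matter how often
    the buffer is accessed."""
    def __init__(self, window=3.0, clock=time.monotonic):
        self.tch = 0
        self.window = window
        self.clock = clock          # injectable clock, so we can simulate time
        self.last_inc = float("-inf")

    def touch(self):
        now = self.clock()
        if now - self.last_inc >= self.window:
            self.tch += 1
            self.last_inc = now

# simulate a million accesses inside one window: tch rises only once
fake_now = [0.0]
bh = BufferHeader(clock=lambda: fake_now[0])
for _ in range(1_000_000):
    bh.touch()
print(bh.tch)      # 1

fake_now[0] = 3.5  # advance past the 3-second window
bh.touch()
print(bh.tch)      # 2
```

This is why a hot block's TCH understates how hard it is really being hit: a million touches in a window count the same as one.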

In the case where the CBC latch contention is happening right now we can run all of this analysis in one query

   select
            name, file#, dbablk, obj, tch, hladdr
    from x$bh bh
        , obj$ o
     where
           o.obj#(+)=bh.obj and
          hladdr in
    (
        select ltrim(to_char(p1,'XXXXXXXXXX') )
        from v$active_session_history
        where event like 'latch: cache buffers chains'
        group by p1
        having count(*) > 5
    )
       and tch > 5
    order by tch

example output

    NAME          FILE# DBABLK    OBJ TCH HLADDR
    ------------- ----- ------ ------ --- --------
    BBW_INDEX         1 110997  66051  17 6BD91180
    IDL_UB1$          1  54837     73  18 6BDB8A80
    VIEW$             1   6885     63  20 6BD91180
    VIEW$             1   6886     63  24 6BDB8A80
    DUAL              1   2082    258  32 6BDB8A80
    DUAL              1   2081    258  32 6BD91180
    MGMT_EMD_PING     3  26479  50312 272 6BDB8A80

This can be misleading, as TCH gets set to 0 on every wrap around the LRU, and it only gets updated once every 3 seconds, so in this case DUAL was my problem table, not MGMT_EMD_PING.

Deeper Analysis from Tanel Poder

http://blog.tanelpoder.com/2009/08/27/latch-cache-buffers-chains-latch-contention-a-better-way-for-finding-the-hot-block/comment-page-1/#comment-2437

Using Tanel’s ideas here’s a script to get the objects that we have the most cbc latch waits on

    col object_name for a35
    col cnt for 99999

    SELECT
      cnt, object_name, object_type,file#, dbablk, obj, tch, hladdr
    FROM (
      select count(*) cnt, rfile, block from (
        SELECT /*+ ORDERED USE_NL(l.x$ksuprlat) */
          --l.laddr, u.laddr, u.laddrx, u.laddrr,
          dbms_utility.data_block_address_file(to_number(object,'XXXXXXXX')) rfile,
          dbms_utility.data_block_address_block(to_number(object,'XXXXXXXX')) block
        FROM
           (SELECT /*+ NO_MERGE */ 1 FROM DUAL CONNECT BY LEVEL <= 100000) s,
           (SELECT ksuprlnm LNAME, ksuprsid sid, ksuprlat laddr,
           TO_CHAR(ksulawhy,'XXXXXXXXXXXXXXXX') object
            FROM x$ksuprlat) l,
           (select  indx, kslednam from x$ksled ) e,
           (SELECT
                        indx
                      , ksusesqh     sqlhash
       , ksuseopc
       , ksusep1r laddr
                 FROM x$ksuse) u
        WHERE LOWER(l.Lname) LIKE LOWER('%cache buffers chains%')
         AND  u.laddr=l.laddr
         AND  u.ksuseopc=e.indx
         AND  e.kslednam like '%cache buffers chains%'
        )
       group by rfile, block
       ) objs,
         x$bh bh,
         dba_objects o
    WHERE
          bh.file#=objs.rfile
     and  bh.dbablk=objs.block
     and  o.object_id=bh.obj
    order by cnt
    ;

    CNT  OBJECT_NAME       TYPE  FILE#  DBABLK    OBJ   TCH  HLADDR
    ---- ----------------- ----- ----- ------- ------ ----- --------
       1 WB_RETROPAY_EARNS TABLE     4   18427  52701  1129 335F7C00
       1 WB_RETROPAY_EARNS TABLE     4   18194  52701  1130 335F7C00
       3 PS_RETROPAY_RQST  TABLE     4   13253  52689  1143 33656D00
       3 PS_RETROPAY_RQST  INDEX     4   13486  52692   997 33656D00
       3 WB_JOB            TABLE     4   14443  52698   338 335B9080
       5 PS_RETROPAY_RQST  TABLE     4   13020  52689   997 33656D00
       5 WB_JOB            TABLE     4   14676  52698   338 335B9080
       5 WB_JOB            TABLE     4   13856  52698   338 335F7C00
       6 WB_JOB            TABLE     4   13623  52698   338 335F7C00
       7 WB_JOB            TABLE     4   14909  52698   338 335B9080
     141 WB_JOB            TABLE     4   15142  52698   338 335B9080
    2513 WB_JOB            INDEX     4   13719  52699   997 33656D00

Why do we get cache buffers chains latch contention?

In order to understand why we get CBC latch contention we have to understand what the CBC latch protects. The CBC latch protects information controlling the buffer cache. Here is a schematic of computer memory and the Oracle processes, SGA and the main components of the SGA:

[figure: computer memory, Oracle processes, the SGA and its main components]

The buffer cache holds in-memory versions of data blocks for faster access. Can you imagine, though, how we find a block we want in the buffer cache? The buffer cache doesn’t have an index of the blocks it contains, and we certainly don’t scan the whole cache looking for the block we want (though I have heard that as a concern when people increase the size of their buffer cache). The way we find a block in the buffer cache is by taking the block’s address, i.e. its file and block number, and hashing it. What’s hashing? A simple example of hashing is the «modulo» function:

1 mod 4 = 1
2 mod 4 = 2
3 mod 4 = 3
4 mod 4 = 0
5 mod 4 = 1
6 mod 4 = 2
7 mod 4 = 3
8 mod 4 = 0

Using «mod 4» as a hash function creates 4 possible results. These results are used by Oracle as «buckets», or identifiers of locations to store things. The things in this case are block headers.

[figure: buffer cache hash buckets]

Block headers are meta data about data block including pointers to the actual datablock as well as pointers to the other headers in the same bucket.

[figure: buffer header (x$bh) structure]

The block headers in the hash buckets are connected via a doubly linked list: one link points forward, the other points backwards.

[figure: doubly linked lists of buffer headers]

The resulting layout looks like

[figure: resulting buffer cache layout]

The steps to find a block in the cache are:

[figure: steps to find a block in the cache]

If there are a lot of sessions concurrently accessing the same buffer header (or buffer headers in the same bucket) then the latch that protects that bucket will get hot and users will have to wait getting «latch: cache buffers chains» wait.
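The lookup path described above (hash the block address to a bucket, then walk that bucket's chain of headers) can be sketched as a toy model. The hash function, structure names and bucket count here are illustrative assumptions, not Oracle internals:

```python
class BufferCache:
    """Toy model: buffers are found by hashing the data block address
    into a bucket, then walking that bucket's chain of headers."""
    def __init__(self, n_buckets=4):
        self.buckets = [[] for _ in range(n_buckets)]  # each bucket holds a chain

    def _bucket(self, file_no, block_no):
        # hash the data block address (file#, block#) into a bucket
        return (file_no * 100003 + block_no) % len(self.buckets)

    def put(self, file_no, block_no, data):
        self.buckets[self._bucket(file_no, block_no)].append(
            {"file#": file_no, "dbablk": block_no, "data": data})

    def get(self, file_no, block_no):
        # in Oracle this chain walk happens while holding the bucket's
        # cache buffers chains latch, so a hot bucket means latch contention
        for hdr in self.buckets[self._bucket(file_no, block_no)]:
            if hdr["file#"] == file_no and hdr["dbablk"] == block_no:
                return hdr["data"]
        return None

cache = BufferCache()
cache.put(4, 394, "EMP_CLUSTER block")
print(cache.get(4, 394))   # EMP_CLUSTER block
print(cache.get(1, 2082))  # None: not cached
```

Every session looking up any block that hashes to the same bucket serializes on that bucket's latch, which is exactly what the wait event reports.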

[figure: long chain on a hot CBC latch]

Two ways this can happen (among probably several others):

[figure: two common CBC contention cases]

For the nested loops example, Oracle will in some (most?) cases try to pin the root block of the index, because Oracle knows we will be using it over and over. When a block is pinned we don’t have to use the CBC latch. There seem to be cases (some, I think, might be bugs) where the root block doesn’t get pinned. (I want to look into this more; let me know if you have more info.)

One thing that can make CBC latch contention worse is a session modifying the data block that users are reading, because readers will clone the block with uncommitted changes and roll back the changes in their cloned copy:

[figure: consistent-read cloning]

all these clone copies will go in the same bucket and be protected by the same latch:

[figure: clones in one bucket under one latch]
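A minimal sketch of that point (the hash function here is illustrative, not Oracle's): clones share one data block address, so they all land in the same bucket, on one chain, under one latch:

```python
# Illustrative only: any deterministic hash of the data block address
# sends every clone of block (file# 1, block# 93170) to the same bucket.
def bucket(file_no, block_no, n_buckets=1024):
    return (file_no * 100003 + block_no) % n_buckets

# 14 clones of the same block: identical address -> identical bucket
buckets_hit = {bucket(1, 93170) for _clone in range(14)}
print(len(buckets_hit))  # 1: all 14 clones on one chain, one latch
```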

How many copies of a block are in the cache?

    select
           count(*)
         , name
         , file#
         , dbablk
         , hladdr
    from   x$bh bh
              , obj$ o
    where
          o.obj#(+)=bh.obj and
          hladdr in
    (
        select ltrim(to_char(p1,'XXXXXXXXXX') )
        from v$active_session_history
        where event like 'latch: cache%'
        group by p1
    )
    group by name,file#, dbablk, hladdr
    having count(*) > 1
    order by count(*);

    CNT NAME        FILE#  DBABLK HLADDR
    --- ---------- ------ ------- --------
     14 MYDUAL          1   93170 2C9F4B20

Notice that the number of copies, 14, is higher than the maximum number of copies allowed, set by «_db_block_max_cr_dba = 6» in 10g. The reason is that this value is just a directive, not a restriction: Oracle merely tries to limit the number of copies.

Solutions

· Find the SQL (why is the application hitting the block so hard?)
· Possibly change the application logic
· Eliminate hot spots
· Nested loops, possibly:
        · Hash partition the index with the hot block
        · Use a hash join instead of a nested loop join
        · Use hash clusters
· Look-up tables (“select language from lang_table where …”):
        · Change the application
        · Use a PL/SQL function
· Spread data out to reduce contention, e.g. set PCTFREE to 0 and recreate the table so that there is only one row per block
· Select from dual:
        · Possibly use x$dual
        · Note: starting in 10g Oracle uses the «fast dual» table (i.e. x$dual) automatically when executing a query on dual, as long as the column «dummy» is not accessed. Accessing dummy would be cases like:
                select count(*) from dual;
                select * from dual;
                select dummy from dual;
          An example of not accessing «dummy» would be:
                select 1 from dual;
                select sysdate from dual;
· Updates, inserts and select for update on blocks while reading those blocks cause multiple copies and make things worse


Other References
http://blog.tanelpoder.com/2009/08/27/latch-cache-buffers-chains-latch-contention-a-better-way-for-finding-the-hot-block
http://www.pythian.com/news/1135/tuning-latch-contention-cache-buffers-chain-latches/
http://www.oaktable.net/content/latch-cache-buffer-chains-small-index-primary-key-caused-concurrent-batch-scripts-select-sta#comment-6
http://jonathanlewis.wordpress.com/2008/02/09/index-rebuild-10g/

http://www.programering.com/a/MzNzkzMwATM.html
