
ORA-27300 ORA-27301 ORA-27302 ORA-27157 Database Crash


While testing an upgrade to 12c on RedHat 7.2 we ran into:

Errors in file /oracle/app/base/diag/asm/+asm/+ASM1/trace/+ASM1_smon_11227.trc:
ORA-27300: OS system dependent operation:semctl failed with status: 22
ORA-27301: OS failure message: Invalid argument
ORA-27302: failure occurred at: sskgpwrm1
ORA-27157: OS post/wait facility removed
ORA-27300: OS system dependent operation:semop failed with status: 43
ORA-27301: OS failure message: Identifier removed
ORA-27302: failure occurred at: sskgpwwait1

The workaround is to set RemoveIPC=no in /etc/systemd/logind.conf (see MOS Doc ID 438205.1).
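The change itself is one line in logind.conf (followed by a restart of systemd-logind). A minimal Python sketch of such an edit, operating on the file text rather than the live system; the helper name is ours:

```python
import re

def set_remove_ipc(text, value="no"):
    """Return logind.conf-style text with RemoveIPC set (uncommented) to the given value."""
    if re.search(r"(?m)^\s*#?\s*RemoveIPC=", text):
        # Replace an existing (possibly commented-out) setting in place.
        return re.sub(r"(?m)^\s*#?\s*RemoveIPC=.*$", f"RemoveIPC={value}", text)
    # No existing setting: append one.
    return text.rstrip("\n") + f"\nRemoveIPC={value}\n"

print(set_remove_ipc("[Login]\n#RemoveIPC=yes\n"))
```

On a real host the file is /etc/systemd/logind.conf and `systemctl restart systemd-logind` is needed for the change to take effect.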

postgresql bloat monitoring script

select * from (
SELECT
  current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/
  ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::FLOAT/otta END)::NUMERIC,1) AS tbloat,
  CASE WHEN relpages < otta THEN 0 ELSE round(bs*((sml.relpages-otta)::bigint)/1024/1024,1) END AS wastedMBytes,
  iname, /*ituples::bigint, ipages::bigint, iotta,*/
  ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::FLOAT/iotta END)::NUMERIC,1) AS ibloat,
  CASE WHEN ipages < iotta THEN 0 ELSE round(bs*((ipages-iotta)::bigint)/1024/1024,1) END AS wastediMBytes
FROM (
  SELECT
    schemaname, tablename, cc.reltuples, cc.relpages, bs,
    CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::FLOAT)) AS otta,
    COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::FLOAT)),0) AS iotta -- very rough approximation, assumes all cols
  FROM (
    SELECT
      ma,bs,schemaname,tablename,
      (datawidth+(hdr+ma-(CASE WHEN hdr%ma=0 THEN ma ELSE hdr%ma END)))::NUMERIC AS datahdr,
      (maxfracsum*(nullhdr+ma-(CASE WHEN nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
    FROM (
      SELECT
        schemaname, tablename, hdr, ma, bs,
        SUM((1-null_frac)*avg_width) AS datawidth,
        MAX(null_frac) AS maxfracsum,
        hdr+(
          SELECT 1+COUNT(*)/8
          FROM pg_stats s2
          WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename
        ) AS nullhdr
      FROM pg_stats s, (
        SELECT
          (SELECT current_setting('block_size')::NUMERIC) AS bs,
          CASE WHEN SUBSTRING(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,
          CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma
        FROM (SELECT version() AS v) AS foo
      ) AS constants
      GROUP BY 1,2,3,4,5
    ) AS foo
  ) AS rs
  JOIN pg_class cc ON cc.relname = rs.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'
  LEFT JOIN pg_index i ON indrelid = cc.oid
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml
) bloat 
where ((bloat.tbloat > 5 or bloat.ibloat > 5) and wastedMBytes > 100) or wastedMBytes > 1024
ORDER BY wastedMBytes DESC;

Taken from https://wiki.postgresql.org/wiki/Show_database_bloat
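For reference, the bloat arithmetic the query performs per table: tbloat is the ratio of actual pages (relpages) to the estimated minimum (otta), and the wasted space is the surplus pages times the block size. A quick sketch in Python with made-up numbers:

```python
def bloat(bs, relpages, otta):
    """Mirror the query's per-table figures: (bloat ratio, wasted MB).

    bs = block size in bytes, relpages = actual pages, otta = estimated minimal pages.
    """
    tbloat = round(relpages / otta, 1) if otta else 0.0
    wasted_mb = 0 if relpages < otta else round(bs * (relpages - otta) / 1024 / 1024, 1)
    return tbloat, wasted_mb

print(bloat(8192, 1000, 400))  # a table using 2.5x its minimal pages
```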

mass move tables by dbms_redefinition


++ redefinition

--ALTER USER "XXXXXX" QUOTA UNLIMITED ON XXXXXX;

 
create table exampl_redif (a clob, i int);
 
delete from exampl_redif;
insert into exampl_redif values('
  exec dbms_redefinition.can_redef_table (''REDIFINISHION_TABLE_OWNER'',''REDIFINISHION_TABLE_NAME'',dbms_redefinition.cons_use_rowid);
 
  BEGIN
   DBMS_REDEFINITION.start_redef_table (''REDIFINISHION_TABLE_OWNER'',
                                        ''REDIFINISHION_TABLE_NAME'',
                                        ''REDIFINISHION_TABLE_NAME_REDEF'',
                                        NULL,
                                        DBMS_REDEFINITION.cons_use_rowid);
END;
/
 
 
 
 
BEGIN
DBMS_REDEFINITION.SYNC_INTERIM_TABLE(''REDIFINISHION_TABLE_OWNER'',
                                        ''REDIFINISHION_TABLE_NAME'',
                                        ''REDIFINISHION_TABLE_NAME_REDEF'');
END;
/
 
 
DECLARE
   num_errors   PLS_INTEGER;
BEGIN
   DBMS_REDEFINITION.copy_table_dependents (''REDIFINISHION_TABLE_OWNER'',
                                            ''REDIFINISHION_TABLE_NAME'',
                                            ''REDIFINISHION_TABLE_NAME_REDEF'',
                                            1,
                                            TRUE,
                                            TRUE,
                                            TRUE,
                                            TRUE,
                                            num_errors,
                                            TRUE);
END;
/
 
 
begin
for i in (
select ''alter index ''||owner||''."''||index_name||''" rebuild tablespace REDIFINISHION_TABLESPACE_NEW online'' isql from all_indexes where TABLE_NAME=''REDIFINISHION_TABLE_NAME_REDEF'' and tablespace_name=''REDIFINISHION_TABLESPACE_OLD''
)
loop
execute immediate i.isql;
end loop;
end;
/
 
 
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE(''REDIFINISHION_TABLE_OWNER'',
                                        ''REDIFINISHION_TABLE_NAME'',
                                        ''REDIFINISHION_TABLE_NAME_REDEF'');
END;
/
 
begin
for i in (
select ''alter table ''||owner||''.''||table_name||'' drop constraint ''||constraint_name isql from all_constraints where r_constraint_name=(select constraint_name from all_constraints where table_name=''REDIFINISHION_TABLE_NAME_REDEF'' and constraint_type=''P'' and owner=''REDIFINISHION_TABLE_OWNER'') and owner=''REDIFINISHION_TABLE_OWNER'' and status=''DISABLED''
)
loop
execute immediate i.isql;
end loop;
end;
/
 
DROP TABLE REDIFINISHION_TABLE_OWNER.REDIFINISHION_TABLE_NAME_REDEF;
 
 
',1);
 
commit;
 
 
 
 
 
 
 
 
select owner,table_name,
replace(
    replace(
       dbms_metadata.get_ddl('TABLE',table_name,owner),'"'||owner||'"."'||table_name||'"','"'||owner||'"."'||table_name||'_REDEF"'
           ),
'TABLESPACE "USERS"','TABLESPACE "NEW_TABLESPACE"'
       ) || ';' ||
                replace(
                  replace(
                   replace(
                    replace(
                      (select a from  exampl_redif),'REDIFINISHION_TABLE_OWNER',owner
                            ),'REDIFINISHION_TABLE_NAME',table_name
                           ),'REDIFINISHION_TABLESPACE_NEW','PGU'
                          ),'REDIFINISHION_TABLESPACE_OLD','USERS'
                        )                         
from all_tables where TABLESPACE_NAME='USERS' and OWNER='MY_OWNER';
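The SELECT above simply instantiates the stored template with nested replace() calls, one per placeholder. The same idea sketched in Python (the shortened template and the owner/table/tablespace names here are hypothetical):

```python
# Instantiate a stored redefinition template for one table by substituting
# the placeholders, mirroring the nested replace() calls in the SQL above.
template = (
    "exec dbms_redefinition.can_redef_table("
    "'REDIFINISHION_TABLE_OWNER','REDIFINISHION_TABLE_NAME',"
    "dbms_redefinition.cons_use_rowid);"
)

def instantiate(tmpl, owner, table, ts_new, ts_old):
    # Same substitution order as the SQL: owner, table name, new/old tablespace.
    # Note: replacing ..._TABLE_NAME also rewrites ..._TABLE_NAME_REDEF, as intended.
    for placeholder, value in [("REDIFINISHION_TABLE_OWNER", owner),
                               ("REDIFINISHION_TABLE_NAME", table),
                               ("REDIFINISHION_TABLESPACE_NEW", ts_new),
                               ("REDIFINISHION_TABLESPACE_OLD", ts_old)]:
        tmpl = tmpl.replace(placeholder, value)
    return tmpl

print(instantiate(template, "MY_OWNER", "MY_TABLE", "PGU", "USERS"))
```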

PS: thanks to Vladimir Mukin for the material.

quickest way to get statistics info


select sname, spare4 from sys.optstat_hist_control$;

• gather system stats:

– EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
– EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
– EXEC DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval=>60);

• switch on incremental statistics for partitioned tables:

– EXEC DBMS_STATS.SET_GLOBAL_PREFS('INCREMENTAL','TRUE');

oracle export data without blob


find top space occupants:

select round(sum(e.bytes/1024/1024/1024)) as gb_size,
       e.owner,
       case when e.segment_name like 'SYS_LOB%' then 'lob_of:'||l.table_name||' column:'||l.column_name else e.segment_name end as segment_name
from dba_extents e
left join dba_lobs l on e.segment_name=l.segment_name
group by e.owner,
         case when e.segment_name like 'SYS_LOB%' then 'lob_of:'||l.table_name||' column:'||l.column_name else e.segment_name end
having round(sum(e.bytes/1024/1024/1024)) > 10
order by 1 desc;


   GB_SIZE | OWNER                    | SEGMENT_NAME
---------- | ------------------------ | ----------------------------------------------------------------------------------------------------
       240 | XXX                      | lob_of:XXX_DOC column:DOC_CONTENT
       152 | XXX                      | lob_of:XXX_REVISION_DOCUMENTS_TMP column:FILE_CONTENT
        93 | XXX                      | lob_of:XXX_DOC column:ARCHIVED_DOC_CONTENT
        63 | XXX                      | lob_of:XXX_CONTRACTS column:FILE_CONTENT
        57 | XXX                      | lob_of:XXX_DOC_TMP column:DOC_CONTENT
        36 | XXX                      | lob_of:XXX_CONTRACTS column:ARCHIVED_DOC_CONTENT
        34 | XXX                      | lob_of:XXX_REVISION_DOCUMENTS column:FILE_CONTENT
        33 | XXX                      | lob_of:XXX_BID_ORG_DOCUMENTS column:FILE_CONTENT
        33 | XXX                      | lob_of:XXX_DOC_TMP column:ARCHIVED_DOC_CONTENT
        32 | XXX                      | lob_of:XXX_PROTOCOL_DOCUMENTS column:FILE_CONTENT
        29 | XXX                      | DR$XXX_DOC_CONT_CTX_I$I
        19 | XXX                      | lob_of:XXX_LOT_PHOTO column:PHOTO
        18 | XXX                      | lob_of:XXX_PROTOCOL_DOCUMENTS column:ARCHIVED_DOC_CONTENT
        17 | XXX                      | lob_of:XXX_ROSIM_ACTS column:DOC_CONTENT
        15 | XXX                      | DR$XXX_DOC_CONT_CTX_I$X
        13 | XXX                      | lob_of:XXX_ROSIM_ACTS column:ARCHIVED_DOC_CONTENT
        13 | XXX                      | lob_of:XXX_LOT_PHOTO column:ARCHIVED_DOC_CONTENT
        11 | XXX                      | lob_of:XXX_REVISION_DOCUMENTS column:ARCHIVED_DOC_CONTENT
-------------------------------------------------------

So if we skip the BLOB columns at export, we save about 85% of the space.

database size:

select SUM(BYTES)/1024/1024/1024 from dba_extents;
SUM(BYTES)/1024/1024/1024
-------------------------
               1017.16553
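The 85% figure checks out against the numbers above: the LOB segments listed (excluding the two DR$ Text index segments, which are not BLOB columns) total about 864 GB of the 1017 GB database:

```python
# LOB segment sizes (GB) from the top-space-occupants output above,
# DR$ context-index segments excluded.
lob_gb = [240, 152, 93, 63, 57, 36, 34, 33, 33, 32, 19, 18, 17, 13, 13, 11]
db_gb = 1017

saved = sum(lob_gb) / db_gb
print(sum(lob_gb), f"{saved:.0%}")  # total LOB GB and the fraction of the database they occupy
```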
 

Do the export remapping each BLOB column through a function that returns an empty (NULL) BLOB.

Create the package/function returning a NULL BLOB:

create or replace 
package remap as 
  function null_lob(col blob) return blob;
end remap;
/
create or replace
package body remap as
  function null_lob(col blob)
    return blob is
    eblob blob;
  begin
    return null;
  end null_lob;
end remap;
/

Export:

     
 create directory dump as '+data/dump';
 create directory logdir as '/oracle/dump';
 expdp \'/ as sysdba\' schemas=XXX,XXXXXX,XXX_XXX  dumpfile=DUMP:exp_xxx.dmp logfile=LOGDIR:exp_xxx.log
REMAP_DATA= XXX.FDC_DOC.DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_DOC.ARCHIVED_DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_REVISION_DOCUMENTS_TMP.FILE_CONTENT:SYS.remap.null_lob,XXX.FDC_CONTRACTS.FILE_CONTENT:SYS.remap.null_lob
,XXX.FDC_CONTRACTS.ARCHIVED_DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_DOC_TMP.DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_DOC_TMP.ARCHIVED_DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_REVISION_DOCUMENTS.FILE_CONTENT:SYS.remap.null_lob
,XXX.FDC_REVISION_DOCUMENTS.ARCHIVED_DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_BID_ORG_DOCUMENTS.FILE_CONTENT:SYS.remap.null_lob,XXX.FDC_PROTOCOL_DOCUMENTS.FILE_CONTENT:SYS.remap.null_lob,XXX.FDC_PROTOCOL_DOCUMENTS.ARCHIVED_DOC_CONTENT:SYS.remap.null_lob
,XXX.FDC_ROSIM_ACTS.DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_ROSIM_ACTS.ARCHIVED_DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_LOT_PHOTO.ARCHIVED_DOC_CONTENT:SYS.remap.null_lob,XXX.FDC_LOT_PHOTO.PHOTO:SYS.remap.null_lob

PS: the dump size is 52 GB.

found at:
http://dba.stackexchange.com/questions/25540/export-table-without-blob-column

what happens when a table does not fit into the keep pool


Create a big table for the tests:

create table bigtab (
    id      number(12,6),
    v1      varchar2(10),
    padding varchar2(50)
)
nologging   -- just to save a bit of time
/

insert /*+ append ordered full(s1) use_nl(s2) */
into bigtab
select
        3000 + trunc((rownum-1)/500,6),
        to_char(rownum),
        rpad('x',50,'x')
from
        all_objects s1,      -- you'll need the privilege
        all_objects s2
where
        rownum <= 2000000
/
commit;

check size:

21:07:01 (1)[PDBORCL]c##bushmelev_aa@orcl> @seg_space bigtab
old   2: where segment_name=upper ('&1')
new   2: where segment_name=upper ('bigtab')

OWNER           | SEGMENT_NAME                             |     MB_SIZE
--------------- | ---------------------------------------- | -----------
C##BUSHMELEV_AA | BIGTAB                                   |       152.0

Elapsed: 00:00:07.42
23:28:41 (1)[PDBORCL]c##bushmelev_aa@orcl> l
  1  select owner,segment_name,sum(bytes)/1024/1024 as MB_SIZE from dba_extents
  2  where segment_name=upper ('&1')
  3* group by segment_name,owner

PS: descriptions of x$kcbwds and x$kcbwbpd can be found in the Oracle X$ Tables post.

Useful links:
http://enkitec.tv/2012/05/19/oracle-full-table-scans-direct-path-reads-object-level-checkpoints-ora-8103s/
Direct path read and fast full index scans

Let's check:

 select ksppinm as hidden_parameter  , ksppstvl as value
from x$ksppi join x$ksppcv using (indx)
 where ksppinm like '\_small_table_threshold%' escape '\' order by ksppinm;

HIDDEN_PARAMETER          | VALUE
------------------------- | ----------
_small_table_threshold    | 2203

00:25:34 (1)[PDBORCL]sys@orcl> select 2203*8192/1024/1024 from dual;

2203*8192/1024/1024
-------------------
         17.2109375

So any table larger than this ~17 MB threshold (ours is 152 MB) should be read via direct path read (DPR) on a full table scan.
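The conversion done in the session above, for reference: _small_table_threshold is expressed in database blocks, and with an 8 KB block size 2203 blocks is about 17 MB.

```python
# _small_table_threshold is in database blocks; convert to MB (8 KB blocks).
threshold_blocks = 2203
block_size = 8192  # bytes

print(threshold_blocks * block_size / 1024 / 1024)  # 17.2109375
```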

Now let's full scan bigtab (there are no indexes on the table, so I use count(*) to minimize network influence on the results) and check the SGA info and session stats:

00:13:23 (1)[PDBORCL]c##bushmelev_aa@orcl> select count (*) from bigtab;

  COUNT(*)
----------
   2000000

using:
Script to Identify Objects and Amount of Blocks in the Buffer Pools – Default, Keep, Recycle, nK Cache (Doc ID 373472.1)

Get info about the segment in the SGA:

select pd.bp_id,
       decode(pd.bp_id,1,'KEEP',2,'RECYCLE',3,'DEFAULT',4,'2K SUBCACHE',5,'4K SUBCACHE',6,'8K SUBCACHE',7,'16K SUBCACHE',8,'32K SUBCACHE','UNKNOWN') subcache,
       bh.object_name,
       round(sum(bh.blocks)*(select to_number(value) from v$parameter where name='db_block_size')/(1024*1024),2) used_mb
from x$kcbwds ds, x$kcbwbpd pd,
     (select /*+ use_hash(x) */ set_ds, o.name object_name, count(*) blocks
      from obj$ o, x$bh x
      where o.dataobj#=x.obj and x.state!=0 and o.owner#!=0 and upper(o.name)=upper('&mytable')
      group by set_ds, o.name) bh
where ds.set_id>=pd.bp_lo_sid and ds.set_id<=pd.bp_hi_sid and pd.bp_size!=0 and ds.addr=bh.set_ds
group by pd.bp_id,
         decode(pd.bp_id,1,'KEEP',2,'RECYCLE',3,'DEFAULT',4,'2K SUBCACHE',5,'4K SUBCACHE',6,'8K SUBCACHE',7,'16K SUBCACHE',8,'32K SUBCACHE','UNKNOWN'),
         bh.object_name;

00:50:04 (1)[PDBORCL]sys@orcl> @db_seg_cache2
Enter value for mytable: bigtab

     BP_ID | SUBCACHE   | OBJECT_NAME     |    USED_MB
---------- | ---------- | --------------- | ----------
         3 | DEFAULT    | BIGTAB          |        .01

Screen Shot 2016-01-03 at 00.29.56

Now put the whole table (152 MB) into the keep pool:

alter system set db_keep_cache_size=200m;
alter table bigtab storage (buffer_pool keep );

Now the full scan works from the cache:

Screen Shot 2016-01-03 at 00.37.23

Now check the buffer pools:

select BP_ID, decode(pd.bp_id,1,'KEEP',2,'RECYCLE',3,'DEFAULT',4,'2K SUBCACHE',5,'4K SUBCACHE',6,'8K SUBCACHE',7,'16K SUBCACHE',8,'32K SUBCACHE','UNKNOWN') subcache,
bh.object_name, round (sum(bh.blocks)* (select to_number(value) from v$parameter where name='db_block_size') /(1024*1024),2) used_mb
from x$kcbwds ds, x$kcbwbpd pd,
(select /*+ use_hash(x) */ set_ds, o.name object_name, count(*) BLOCKS from obj$ o, x$bh x where o.dataobj#=x.obj and x.state!=0 and o.owner#!=0 and upper(o.name)=upper('&mytable') group by set_ds, o.name) bh
where ds.set_id>=pd.bp_lo_sid and ds.set_id<=pd.bp_hi_sid and pd.bp_size!=0 and ds.addr=bh.set_ds
group by pd.BP_ID, decode(pd.bp_id,1,'KEEP',2,'RECYCLE',3,'DEFAULT',4,'2K SUBCACHE',5,'4K SUBCACHE',6,'8K SUBCACHE',7,'16K SUBCACHE',8,'32K SUBCACHE','UNKNOWN') , bh.object_name;

     BP_ID | SUBCACHE   | OBJECT_NAME     |    USED_MB
---------- | ---------- | --------------- | ----------
         3 | DEFAULT    | BIGTAB          |        .01
         1 | KEEP       | BIGTAB          |     148.64

Shrink the keep pool to 120 MB and repeat the test:

alter system set db_keep_cache_size=120m;

00:39:38 (1)[PDBORCL]sys@orcl> /
Enter value for mytable: bigtab

     BP_ID | SUBCACHE   | OBJECT_NAME     |    USED_MB
---------- | ---------- | --------------- | ----------
         3 | DEFAULT    | BIGTAB          |        .01
         1 | KEEP       | BIGTAB          |     133.27

Even when 87% of the table's blocks (133 MB of 152 MB) sit in the keep pool in current mode, the session still does DPR.
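For reference, the share of the table still cached in the keep pool at this point, from the used_mb figures above:

```python
# Fraction of the 152 MB table held in the keep pool after shrinking it.
cached_mb, table_mb = 133.27, 152.0

print(round(cached_mb / table_mb * 100, 1))  # 87.7
```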

01:01:19 (1)[PDBORCL]sys@orcl> @db_seg_cache c##bushmelev_aa bigtab

AREA         | STATUS                         | DISTINCT_BLOCK_COUNT                     | BLOCK_COUNT
------------ | ------------------------------ | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------
BUFFER CACHE | free                           | 3                                        | 3
BUFFER CACHE | xcur                           | 17060                                    | 17060
BUFFER CACHE | summary                        | 17063                                    | 17063
DATABASE     | db blocks                      | 19456                                    |
SGA          | BUFFER CACHE of MAX SGA SIZE   | 754,974,720 of 1,426,063,360             | (Resizeable)

Elapsed: 00:00:00.11
01:01:20 (1)[PDBORCL]sys@orcl> l
  1  with OBJLIST as
  2   (select DATA_OBJECT_ID
  3      from dba_objects
  4     where (owner, object_name) in ((upper(nvl('&&1', 'user')), upper('&&2')))
  5       and DATA_OBJECT_ID is not null)
  6  select 'BUFFER CACHE' as AREA,
  7         nvl(status,'summary') as STATUS,
  8         to_char(count(distinct(file# || '#' || block#))) as DISTINCT_BLOCK_COUNT,
  9         to_char(count(*)) as BLOCK_COUNT
 10    from V$BH, OBJLIST
 11   where objd = OBJLIST.DATA_OBJECT_ID
 12   group by rollup(status)
 13  union all
 14  select 'DATABASE',
 15         'db blocks',
 16         to_char(blocks),
 17         '' as BH_COUNT
 18  from dba_segments where (owner, segment_name) in ((upper(nvl('&&1', 'user')), upper('&&2')))
 19  union all
 20  select 'SGA',
 21         'BUFFER CACHE of MAX SGA SIZE',
 22         trim(to_char(s1.bytes, '999,999,999,999,999')) ||
 23         ' of '||
 24         trim(to_char(s2.bytes, '999,999,999,999,999')),
 25         '(' || decode(s1.resizeable, 'Yes', 'Resizeable', 'Fixed') || ')'
 26* from v$sgainfo s1, v$sgainfo s2 where s1.name = 'Buffer Cache Size' and s2.name = 'Maximum SGA Size'

And the FTS goes via DPR:

Screen Shot 2016-01-03 at 00.41.24

what_happens_when_table_does_not_fet_to_keep_pool

Oracle X$ Tables


Original at yong321.freeshell.org.


Updated to Oracle 12.1.0.2. The X$ tables not included here are too obvious, too obscure, or too uninteresting.

Table name — guessed acronym — comments:

x$activeckpt — active checkpoint. Ckpt_type 2 for MR checkpoint (Ref), 3 for interval (Ref) or thread checkpoint (Ref), 7 for incremental checkpoint, 10 for object reuse/truncate checkpoint, 11 for object checkpoint (Ref).

x$bh — buffer header. This table is commonly used to find the object and the file# and block# of its header when there's high cache buffers chains latch contention: select obj, dbarfil, dbablk from x$bh a, v$latch_children b where a.hladdr = b.addr for the said latch (whose sleeps you think are too high). You can also use this table to see if a specific buffer has too many clones: select dbarfil, dbablk, count(*) from x$bh group by dbarfil, dbablk having count(*) > 2. Note the obj column matches dba_objects.data_object_id, not object_id. For performance reasons, don't merge dba_extents with the query of x$bh that has a group by, unless you use an in-line view and the no_merge hint (see J. Lewis, Practical Oracle8i, p.215). The tch column, touch count, records how many times a particular buffer has been accessed. Its flag column is explained by J. Lewis (some unused columns are later used; e.g. bit 29 means plugged_from_foreign_db in 12c); explanation of state, mode and indx can be found in Anjo Kolk's paper. Tim is the time the buffer touch happened (Note 1). Lru_flag is about the buffer's position on LRU lists (Ref and 136312.1); 2 moved_to_tail, 4 on_auxiliary_list (auxiliary LRU), 8 hot_buffer (on hot end of main LRU), and numbers can be added, e.g. 6=2+4.

x$ckptbuf — checkpoint buffer (queue). Lists the buffers on the checkpoint queue. Immediately after a full checkpoint, the buffers with non-zero buf_ptr and buf_dbablk should go down.

x$dbgalertext — debug alert extended. One use is to find old alert.log text long after you recycled the physical file: select originating_timestamp, message_text from x$dbgalertext. The message_id and message_group columns are also interesting and are not available in alert.log.

x$dbglogext — debug log extended. 12c.

x$dbgricx, x$dbgrifx, x$dbgrikx, x$dbgripx — debug ?. You can quickly summarize what kind of errors the database has had: select error_facility||'-'||error_number, count(*) from x$dbgricx group by error_facility||'-'||error_number order by 2, and optionally restrict to a certain time range. You can summarize on a more granular level, such as shared pool vs large pool on error_arg2 in case of ORA-4031. You can also find records of these errors in (undocumented) v$diag_incident or v$diag_diagv_incident. In any case, you may find this easier than grepping alert.log. For each incident, its session info is in x$dbgrikx.

x$dbkece — debug kernel error, critical error. Base table of undocumented v$diag_critical_error but includes facility dbge (Diagnostic Data Extractor or dde).

x$dbkefefc — debug kernel error, fatal error flood control. Rules for flood control on too many fatal errors.

x$dglparam — data guard logical parameters. Base table of dba_logstdby_parameters but includes invisible parameters.

x$diag_alert_ext — diagnostics alert extended. Base table of v$diag_alert_ext. Same as x$dbgalertext but has more lines and is slower to query.

x$diag_hm_run, x$diag_vhm_run — diagnostics health monitor runs. Base table of undocumented v$diag_(v)hm_run. Health monitor job records. Maybe complementary to v$hm_run?

x$diag_ips_configuration — diagnostics incident packaging service configuration. Base table of v$diag_ips_configuration. Some ADR IPS related config info. Like a few other v$diag* (or x$diag*) tables, some columns such as adr_home and name can't be exactly matched, as if there were trailing characters. CTAS to create a regular table against which you query, or use subquery factoring with the /*+materialize*/ hint.

x$dnfs_meta — dNFS metadata. Some metadata related to dNFS: SGA memory, message timeout, ping timeout, etc.

x$dra_failure — data recovery advisor failures. DRA failure names and descriptions.

x$drm_history, x$drm_history_stats — dynamic remastering history, stats. History of RAC DRM and stats. Parent_key is object_id. If an object is remastered to another node (new_master) too frequently, consider partitioning the app sessions. In 12.1.0.2, there's also x$drm_wait_stats.

x$jskjobq — job scheduling ?, job queue. Internal job queue. Job_oid is object_id in dba_objects. If you must query this table, exit the session as soon as you're done with your work, because after the query your session holds an exclusive JS lock, which will block the CJQ process! Rollback or commit won't release the lock.

x$k2gte, x$k2gte2 — kernel 2-phase commit, global transaction entry. See Note:104420.1. Find sessions coming from or going to a remote database; in short, x$k2gte.k2gtdses matches v$session.saddr, .k2gtdxcb matches v$transaction.addr.

select /*+ ordered */
       substr(s.ksusemnm,1,10)||'-'||substr(s.ksusepid,1,10) origin,
       substr(g.k2gtitid_ora,1,35) gtxid,
       substr(s.indx,1,4)||'.'||substr(s.ksuseser,1,5) lsession,
       s.ksuudlna username,
       substr(decode(bitand(ksuseidl,11), 1,'ACTIVE', 0,decode(bitand(ksuseflg,4096), 0,'INACTIVE','CACHED'),
              2,'SNIPED', 3,'SNIPED', 'KILLED'),1,1) status,
       e.kslednam waiting
from x$k2gte g, x$ktcxb t, x$ksuse s, x$ksled e
where g.k2gtdxcb=t.ktcxbxba
  and g.k2gtdses=t.ktcxbses
  and s.addr=g.k2gtdses
  and e.indx=s.ksuseopc;

