
Online Move datafiles 12c


An Online Move data file operation is not compatible when:
• The data file is an OFFLINE data file
• A concurrent Flashback Database operation is running
• A media recovery is in progress
• A file shrink operation, or a tablespace offline/drop operation involving the same file, is in progress
But it is compatible with:
• Block media recovery
• ALTER TABLESPACE READ ONLY or READ WRITE operations
• Data file extension operation
• Tablespace/database online backup mode involving the same file

If Flashback Database is executed to a point in time before the file was moved, the operation brings back the old contents of the file but does not change the file name back to the original.
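The move itself is a single online statement; a minimal sketch (the file paths are hypothetical):

```sql
-- Move a data file online; by default the source file is removed afterwards
ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/CDB1/users01.dbf'
  TO '/u02/oradata/CDB1/users01.dbf';

-- KEEP retains the original copy; REUSE overwrites an existing target file
ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/CDB1/users01.dbf'
  TO '/u02/oradata/CDB1/users01.dbf' KEEP;
```

Progress of a long-running move can be monitored through V$SESSION_LONGOPS.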

12c Online Partition maintenance enhancements provide the capability to move table partitions or subpartitions online without preventing concurrent DML operations.
This can be used to:
• Move partitions and subpartitions from one kind of storage to another
• Move time-based partitions and subpartitions to low cost storage once they become infrequently accessed (for example, according to configured ADO policies)
• Compress time-based partitions and subpartitions according to configured ADO policies

Online Move Partition: Benefits

• Concurrent DML is allowed, but not concurrent DDL
• Move, split, or merge partitions ONLINE to low cost storage
• Global and local indexes are maintained
• Online operation on IOTs is not supported
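A plain online partition move to cheaper storage, without compression, looks like this (the table, partition, and tablespace names are illustrative):

```sql
ALTER TABLE orders MOVE PARTITION ord_p1
  TABLESPACE low_cost_ts
  UPDATE INDEXES ONLINE;
```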

Online Move Partition: Compress

MOVE is also used for compression.
• ROW STORE COMPRESS [BASIC]

SQL> ALTER TABLE ORDERS MOVE PARTITION ORD_P1 ROW STORE COMPRESS UPDATE INDEXES ONLINE;

• ROW STORE COMPRESS ADVANCED

SQL> ALTER TABLE ORDERS MOVE PARTITION ORD_P1 ROW STORE COMPRESS ADVANCED UPDATE INDEXES ONLINE;

• COLUMN STORE COMPRESS

SQL> ALTER TABLE ORDERS MOVE PARTITION ORD_P1 COLUMN STORE COMPRESS FOR QUERY HIGH UPDATE INDEXES ONLINE;
SQL> ALTER TABLE ORDERS MOVE PARTITION ORD_P1 COLUMN STORE COMPRESS FOR ARCHIVE HIGH UPDATE INDEXES ONLINE;
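After the move you can verify the result by querying the partition's compression attributes (run as the table owner):

```sql
SELECT partition_name, compression, compress_for
FROM   user_tab_partitions
WHERE  table_name = 'ORDERS';
```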

Heat Map and Automatic Data Optimization


Heat Map and Automatic Data Optimization (ADO) can be used to implement your Information Lifecycle Management (ILM) strategy, along with Partitioning, Advanced Compression, and Hybrid Columnar Compression.

Useful links:
• YouTube:
Manage Automatic Data Optimization (ADO) with Enterprise Manager Cloud Control 12c
Exploring Oracle 12c’s Automatic Database Optimization (ADO) Features

• Oracle by Example:
Setting Up Compression Tiering for Automatic Data Optimization
Setting Up Storage Tiering for Automatic Data Optimization


1. The first operation for the DBA is to enable Heat Map, tracking the activity on blocks and segments. Heat Map activates system-generated statistics collection, such as segment access and row and segment modification.
2. Real-time statistics are collected in memory (V$HEAT_MAP_SEGMENT view) and regularly flushed by scheduled DBMS_SCHEDULER jobs to the persistent table HEAT_MAP_STAT$. The persistent data is visible by using the DBA_HEAT_MAP_SEG_HISTOGRAM view.
3. The next operation for the DBA is to create ADO policies on segments or groups of segments or as default ADO behavior on tablespaces.
4. The next step for the DBA is to schedule when ADO policy evaluation must happen, if the default scheduling does not match the business requirements. ADO policy evaluation relies on Heat Map statistics. MMON evaluates row-level policies periodically and starts jobs to compress whichever blocks qualify. Segment-level policies are evaluated and executed only during the maintenance window.
5. The DBA can then view ADO execution results by using the DBA_ILMEVALUATIONDETAILS and DBA_ILMRESULTS views.
6. Finally, the DBA can verify whether the segment has moved and is therefore stored in the tablespace defined in the ADO policy, and/or whether blocks or the whole segment were compressed, by viewing the COMPRESSION_STAT$ table.
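Steps 1, 3, and 5 can be sketched as follows (the table name and the policy clause are illustrative examples of the standard ILM syntax):

```sql
-- Step 1: enable Heat Map tracking instance-wide
ALTER SYSTEM SET HEAT_MAP = ON;

-- Step 3: create a segment-level ADO compression policy
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;

-- Step 5: review policy evaluation results
SELECT policy_name, object_name, selected_for_execution
FROM   dba_ilmevaluationdetails;
```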

Defining Automatic Actions
[Screenshot: table of ADO actions and execution levels]
Define the action and the level of automatic execution:
• The action executed (see the possible actions in the first column of the table above):
– Compression and which type of compression
– Data movement to other storage tier
– Both when defining two policies on the same segment
• The possible levels of execution:
– ROW: Row-level ADO policies can only be created based on modification time.
– SEGMENT: Segment-level ADO policies can apply to tables or partitions.
– GROUP: Group-level ADO policies indicate that the table’s SecureFiles LOBs are
compressed as well. Global indexes are maintained. An ADO policy can be specified for a table with the GROUP keyword that is part of the POLICY clause for compression. If the table becomes eligible for an ADO action at any time, the same ADO action would be performed on all the SecureFiles LOBs of the table as well. For example, if the ADO action is compression, the dependent objects like SecureFiles LOBs would be compressed at compression levels corresponding to a default mapping between heap segment compression levels and those of SecureFiles LOBs. Similar semantics hold for the GROUP keyword for ADO policies on table partitions.
– TABLESPACE: A DEFAULT ADO policy defined on a tablespace applies to all segments that will be created in the tablespace.
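The ROW and SEGMENT levels can be illustrated like this (object names are illustrative; the target tablespace used with TIER TO must already exist):

```sql
-- ROW level: can be based on modification time only
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED ROW
  AFTER 30 DAYS OF NO MODIFICATION;

-- SEGMENT level: storage tiering to a low cost tablespace
ALTER TABLE sales ILM ADD POLICY
  TIER TO low_cost_ts;
```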

A few reminders about compression: Compression can occur while data is being inserted, updated, or bulk-loaded into a table.
• ROW STORE COMPRESS BASIC or ADVANCED is used for rows inserted without using direct-path insert and for updated rows, using the Advanced Compression option (ACO). ROW STORE COMPRESS ADVANCED is the new syntax for the “Advanced Row Compression” feature, the new name for the old OLTP Table Compression feature that is part of ACO. ROW STORE COMPRESS ADVANCED on the heap table maps to LOW for SecureFiles LOB segments when the GROUP keyword is used.
• COLUMN STORE COMPRESS FOR QUERY LOW or HIGH provides a higher level of compression than ROW STORE compression. It works well when load performance is critical, frequent queries are run against the table, and no normal DML is expected. Column Store is a feature commonly referred to as Columnar Compression or just Columnar or HCC. COLUMN STORE COMPRESS FOR QUERY LOW/QUERY HIGH on a heap table maps to MEDIUM for SecureFiles LOB segments.
• COLUMN STORE COMPRESS FOR ARCHIVE LOW/ARCHIVE HIGH compression provides the highest level of compression and works well for infrequently accessed data, mostly for read-only data. It enables HCC. COLUMN STORE COMPRESS FOR ARCHIVE LOW/ARCHIVE HIGH on a heap table maps to MEDIUM for SecureFiles LOB segments.

Assumptions
The environment is prepared beforehand; that is, an Oracle Database 12c non-CDB named orcl is installed. The ILM new features are not supported in a multitenant container database (CDB); any attempt to enable them raises a user exception.

Flashback CDB 12c


Flashback CDB
Enable flashback:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG; 
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=2880 SCOPE=BOTH;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;

If the CDB is in ARCHIVELOG mode, there is no need to restart it.
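You can confirm both settings from V$DATABASE:

```sql
SELECT flashback_on, log_mode FROM v$database;
```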

Restrictions:
• You cannot flash back the root alone without flashing back the entire CDB.
• Flashback Database operations on a CDB may not be permitted if point-in-time recovery has been performed on any of its PDBs. When point-in-time recovery is performed on a PDB, you cannot directly rewind the CDB to a point that is earlier than the point at which DBPITR for the PDB was performed.

Flashback CDB

A common user is dropped.

1. Flashback CDB: CDB mounted in exclusive mode

SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO SCN 53943;

2. To review changes: Open CDB and PDBs in READ ONLY

SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER PLUGGABLE DATABASE ALL OPEN READ ONLY;

3. To finalize: Flash back again if necessary and open CDB with RESETLOGS.

RMAN> SHUTDOWN IMMEDIATE
RMAN> STARTUP MOUNT
RMAN> FLASHBACK DATABASE TO SCN 10; 
RMAN> ALTER DATABASE OPEN RESETLOGS; 
RMAN> ALTER PLUGGABLE DATABASE ALL OPEN;

A common schema has been accidentally dropped in the root. You have to flash back the CDB to the time before the common user was dropped, which impacts all PDBs.
You can use the RMAN FLASHBACK DATABASE command to execute the Flashback Database operation. You can use SEQUENCE and THREAD to specify a redo log sequence number and thread as a lower limit.
Alternatively, you can use the SQL FLASHBACK DATABASE command to return the database to a past time or SCN. If you use the TO SCN clause, you must provide a number. If you specify TO TIMESTAMP, you must provide a time stamp value. You can also specify a restore point name.
Note
• The CDB must be mounted in exclusive mode to issue the FLASHBACK DATABASE command and opened read-only to review changes. The CDB must be opened read/write with the RESETLOGS option when finished.
• When the CDB is opened in READ ONLY mode, the PDBs are still mounted. Open PDBs in READ ONLY mode, too, to review changes.

*While performing a hot CDB backup, you cannot close any PDB.

Media Failure 12c


Media Failure CDB Temporary File Recovery

• Automatic re-creation of temporary files at CDB opening
• Manual re-creation also possible
The CDB instance can start up with a missing temporary file. If any of the temporary files do not exist when the CDB instance is started, they are created automatically and the CDB opens normally. When this happens, a message like the following appears in the alert log during startup:

 Re-creating the temp file /u01/app/oracle/oradata/CDB1/temp01.dbf

You can perform a manual re-creation instead, while connected to the root:

SQL> ALTER TABLESPACE temp ADD TEMPFILE '/u01/app/oracle/oradata/CDB1/temp02.dbf' SIZE 20M;
SQL> ALTER TABLESPACE temp DROP TEMPFILE '/u01/app/oracle/oradata/CDB1/temp01.dbf';

Media Failure: PDB Temporary File Recovery

• Automatic re-creation of temporary files at PDB opening
• Manual re-creation also possible
You can perform a manual re-creation instead, while connected to the PDB:

SQL> ALTER TABLESPACE temp ADD TEMPFILE  '/u01/app/oracle/oradata/CDB1/HR_PDB/temp2_02.dbf'  SIZE 20M;
SQL> ALTER TABLESPACE temp DROP TEMPFILE  '/u01/app/oracle/oradata/CDB1/HR_PDB/temp2_01.dbf';

Media Failure: Control File Loss

Similar to non-CDBs: CDB mounted

RMAN> CONNECT TARGET /
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
RMAN> ALTER PLUGGABLE DATABASE ALL OPEN;

Media Failure: Redo Log File Loss

Missing or corrupted current redo log files require whole-CDB media recovery, because there is only one redo stream for the CDB instance (or one redo stream for each instance of a RAC CDB).
Depending on whether a whole redo log group or only a redo log member is missing, follow the same procedures as those for non-CDBs.
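For example, a damaged inactive, already-archived log group can be cleared exactly as in a non-CDB (the group number is illustrative):

```sql
-- Add the UNARCHIVED keyword if the group was never archived
ALTER DATABASE CLEAR LOGFILE GROUP 2;
```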


Media Failure: Root SYSTEM or UNDO Data File

Similar to non-CDBs: CDB mounted

RMAN> STARTUP MOUNT;
RMAN> RESTORE TABLESPACE undo1;
RMAN> RECOVER TABLESPACE undo1;
RMAN> ALTER DATABASE OPEN;
RMAN> ALTER PLUGGABLE DATABASE ALL OPEN;

Media Failure: Root SYSAUX Data File

Similar to non-CDBs: tablespace OFFLINE

RMAN> ALTER TABLESPACE sysaux OFFLINE IMMEDIATE;
RMAN> RESTORE TABLESPACE sysaux;
RMAN> RECOVER TABLESPACE sysaux;
RMAN> ALTER TABLESPACE sysaux ONLINE;

Media Failure: PDB Data File

Similar to non-CDBs: Perform the recovery within the PDB
• Connect to the PDB.
• Put the tablespace OFFLINE.
• Other PDBs are not impacted.

SQL> CONNECT system@sales_pdb
SQL> ALTER TABLESPACE tbs2 OFFLINE IMMEDIATE; 
RMAN> CONNECT TARGET /
RMAN> RESTORE TABLESPACE sales_pdb:tbs2;
RMAN> RECOVER TABLESPACE sales_pdb:tbs2;
SQL> ALTER TABLESPACE tbs2 ONLINE;

Media Failure: PITR

• PDB PITR

RMAN> ALTER PLUGGABLE DATABASE PDB1 CLOSE; 
RMAN> RUN {
SET UNTIL SCN = 1851648 ; 
RESTORE pluggable DATABASE pdb1; 
RECOVER pluggable DATABASE pdb1
AUXILIARY DESTINATION='/u01/app/oracle/oradata';
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
}

• PDB Tablespace PITR

RMAN> RECOVER TABLESPACE PDB1:TEST_TBS UNTIL SCN 832972
AUXILIARY DESTINATION '/tmp/CDB1/reco'; 
RMAN> ALTER TABLESPACE PDB1:TEST_TBS ONLINE;

set the operating system user for a PDB


To set the operating system user for a PDB:
1. Log in to the root and execute the DBMS_CREDENTIAL.CREATE_CREDENTIAL procedure to create an Oracle credential for the bush operating system user:

begin
  DBMS_CREDENTIAL.CREATE_CREDENTIAL(
    credential_name => 'CDB1_PDBSW_BUSH',
    username        => 'bush',
    password        => 'passwd');
end;
/

PL/SQL procedure successfully completed.

2. Connect to the PDB to set the PDB_OS_CREDENTIAL to the new credential:

11:18:46 (1)[CDB$ROOT]sys@orcl> alter session set container=SW;
11:19:00 (1)[SW]sys@orcl>  ALTER SYSTEM SET PDB_OS_CREDENTIAL=CDB1_PDBSW_BUSH SCOPE=spfile;

3. Restart the CDB instance.
4. Test the connection as bush:

     $ sqlplus sys@pdb1 as sysdba 

not working =(
ps:
PDB_OS_CREDENTIAL
Undocumented
Per feedback from the Multitenant team:
Not functional in Oracle 12.1.0.2.
May become functional with a future PSU, allowing OS user verification/validation for PDBs.

how to find out patches which are applied on oracle 12c database

13:01:46 (1)[CDB$ROOT]sys@orcl> select status,DESCRIPTION from dba_registry_sqlpatch;

STATUS                                        | DESCRIPTION
--------------------------------------------- | --------------------------------------------------
SUCCESS                                       | Database Patch Set Update : 12.1.0.2.5 (21359755)

