Tuesday, May 23, 2017

How to export the sql_profile from one database to another

First, I had to create the TEST table in the target database in order to compare the results after the sql_profile transfer.
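
For context, the TEST table used here is the same small table created in the source database in the "Manual creation of a SQL Profile" post further down this page, roughly:

create table test(n number, w varchar2(3));
insert into test values(1,'sam');
commit;
create index test1 on test(w);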




In the Source database



oracle@test ~/dba/scripts/REFRESH> exp tables=test indexes=no

Export: Release 11.2.0.3.0 - Production on Tue May 23 03:15:10 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Username: sys as sysdba
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in UTF8 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table                           TEST          1 rows exported
Export terminated successfully without warnings.



In the target instance


oracle@test1 /tmp> imp file=expdat.dmp tables=test

Import: Release 11.2.0.3.0 - Production on Tue May 23 03:17:50 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via conventional path
import done in UTF8 character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SYS's objects into SYS
. . importing table                         "TEST"          1 rows imported
Import terminated successfully without warnings.

So we have confirmed that there are no indexes on the table in the target database.

Next, let's confirm that there are no sql profiles present in the target instance:

SYS@test1 AS SYSDBA> select name, created, status from dba_sql_profiles;

no rows selected

SYS@test1 AS SYSDBA>

As expected, no sql_profile is present in the target instance at this moment.



Now we will transfer the sql profile from the source to the target.

First we will create a staging table using the dbms_sqltune package.

In the source instance, we have the sql profile SQLPROFILE_02.


Note: We can't create the staging table under the SYS schema, so I have created it under SYSTEM.

oracle@test /tmp> sqlplus system

SQL*Plus: Release 11.2.0.3.0 Production on Tue May 23 04:31:27 2017

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYSTEM@sam > EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (table_name => 'SQL_PROF_STGTAB');

PL/SQL procedure successfully completed.



Copy the sql profile into the staging table

This step is also called packing the SQL profile into the staging table: the SQL profile is copied from the data dictionary into the staging table so that it is ready for transport.

SYSTEM@sam > EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (staging_table_name => 'SQL_PROF_STGTAB',profile_name=>'SQLPROFILE_02');

PL/SQL procedure successfully completed.

SYSTEM@sam >
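
As a quick sanity check before exporting, you can confirm that the profile actually landed in the staging table. The layout of the staging table is internal to DBMS_SQLTUNE, so here I only count rows (the Data Pump log below also shows a single row being exported):

SYSTEM@sam > select count(*) from SQL_PROF_STGTAB;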


Now we need to move this staging table from the source database to the target database.

SYS@sam AS SYSDBA> CREATE OR REPLACE DIRECTORY test_dir AS '/tmp/tmp';

Directory created.

SYS@sam AS SYSDBA> GRANT READ, WRITE ON DIRECTORY test_dir TO system;

Grant succeeded.

SYS@sam AS SYSDBA> exit;
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@test /tmp> expdp system/** tables=SQL_PROF_STGTAB directory=TEST_DIR dumpfile=stage1.dmp logfile=exp.log compression=all

Export: Release 11.2.0.3.0 - Production on Tue May 23 07:11:06 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** tables=SQL_PROF_STGTAB directory=TEST_DIR dumpfile=stage1.dmp logfile=exp.log compression=all
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/PRE_TABLE_ACTION
Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCDEPOBJ
. . exported "SYSTEM"."SQL_PROF_STGTAB"                  7.695 KB       1 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /tmp/tmp/stage1.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 07:14:58



Scp the stage1.dmp file to a directory on the target database server.
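
If the copied dump file is not in a location that impdp can already see (for example the default DATA_PUMP_DIR), create a directory object on the target as well and add directory=TEST_DIR to the impdp command below. The path here is just the one used in this walkthrough; adjust it for your environment:

SYS@test1 AS SYSDBA> CREATE OR REPLACE DIRECTORY test_dir AS '/oracle/stage/DUMP_DIR_donotdelete/SIT';
SYS@test1 AS SYSDBA> GRANT READ, WRITE ON DIRECTORY test_dir TO system;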


Now let's check the explain plan for the query that we will be using in the target instance.



SYS@test1 AS SYSDBA> explain plan for select * from test where w='sam';

Explained.

SYS@test1 AS SYSDBA> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    16 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |    16 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("W"='sam')

Note
-----
   - dynamic sampling used for this statement (level=2)

17 rows selected.

This is the expected result. Now, once we import the staging table (stage1.dmp) and unpack it into the data dictionary, the query should start using the sql profile taken from the source database.

oracle@test1 /oracle/stage/DUMP_DIR_donotdelete/SIT> impdp system/*** tables=SQL_PROF_STGTAB dumpfile=stage1.dmp

Import: Release 11.2.0.3.0 - Production on Tue May 23 07:19:39 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TABLE_01":  system/******** tables=SQL_PROF_STGTAB dumpfile=stage1.dmp
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SYSTEM"."SQL_PROF_STGTAB"                  7.695 KB       1 rows
Job "SYSTEM"."SYS_IMPORT_TABLE_01" successfully completed at 07:19:41

oracle@test1 /oracle/stage/DUMP_DIR_donotdelete/SIT>


Now we need to unpack the staging table into the data dictionary. This should be done as the SYSTEM user or any user with the proper privileges.


EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF (REPLACE => TRUE,staging_table_name => 'SQL_PROF_STGTAB');

SYSTEM@test1 >  EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF (REPLACE => TRUE,staging_table_name => 'SQL_PROF_STGTAB');

PL/SQL procedure successfully completed.
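
At this point the transferred profile should be visible in the target data dictionary. Re-running the query from the beginning of this post should now list SQLPROFILE_02 as ENABLED:

SYS@test1 AS SYSDBA> select name, created, status from dba_sql_profiles;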



Now let's check the execution plan again.


SYS@test1 AS SYSDBA> explain plan for select * from test where w='sam';

Explained.

SYS@test1 AS SYSDBA> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    16 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |    16 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("W"='sam')

Note
-----
   - SQL profile "SQLPROFILE_02" used for this statement

17 rows selected.

Hmm, the optimizer is picking up the SQL profile, but the plan is still a full table scan because the index was not present in the target database. I had to create it.


SYS@test1 AS SYSDBA> create index test1 on test(w);

Index created.

SYS@test1 AS SYSDBA>  select owner,index_name from dba_indexes where table_name='TEST';

OWNER                          INDEX_NAME
------------------------------ ------------------------------
SYS                            TEST1

SYS@test1 AS SYSDBA> explain plan for select * from test where w='sam';

Explained.

SYS@test1 AS SYSDBA> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3981825257

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    16 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST  |     1 |    16 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("W"='sam')

Note
-----
   - SQL profile "SQLPROFILE_02" used for this statement

18 rows selected.



Happy learning.



















Monday, May 22, 2017

Manual creation of a SQL Profile



The query being used is:


select * from test where w='sam';

explain plan for select * from test where w='sam';

Explained.


select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    16 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |    16 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("W"='sam')

Note
-----
   - dynamic sampling used for this statement (level=2)

17 rows selected.

SYS@SAM AS SYSDBA>


Now we will use a hint to force the optimizer to choose the index; this is the plan we will later lock in with the help of a sql_profile.

SYS@SAM AS SYSDBA> explain plan for select /*+INDEX (TEST) */ * from test where w='sam';

Explained.

SYS@SAM AS SYSDBA> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3981825257

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    16 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST  |     1 |    16 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("W"='sam')

Note
-----
   - dynamic sampling used for this statement (level=2)

18 rows selected.

SYS@SAM AS SYSDBA>


Yes, with the hint the optimizer is actually using the index. Now we will create the sql profile.


Original plan


SYS@SAM AS SYSDBA> explain plan for select * from test where w='sam';

Explained.

SYS@SAM AS SYSDBA> select * from table(dbms_xplan.display(format=>'ADVANCED'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    16 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |    16 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / TEST@SEL$1

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      FULL(@"SEL$1" "TEST"@"SEL$1")
      OUTLINE_LEAF(@"SEL$1")
      ALL_ROWS
      OPT_PARAM('_gby_hash_aggregation_enabled' 'false')
      OPT_PARAM('_fast_full_scan_enabled' 'false')
      OPT_PARAM('_b_tree_bitmap_plans' 'false')
      DB_VERSION('11.2.0.3')
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("W"='sam')

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "TEST"."N"[NUMBER,22], "W"[VARCHAR2,3]

Note
-----
   - dynamic sampling used for this statement (level=2)

44 rows selected.




Modified plan, i.e. the one we want the optimizer to choose

SYS@SAM AS SYSDBA> explain plan for select /*+INDEX (TEST) */ * from test where w='sam';

Explained.

SYS@SAM AS SYSDBA> select * from table(dbms_xplan.display(format=>'ADVANCED'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3981825257

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    16 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST  |     1 |    16 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / TEST@SEL$1
   2 - SEL$1 / TEST@SEL$1

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      INDEX_RS_ASC(@"SEL$1" "TEST"@"SEL$1" ("TEST"."W"))
      OUTLINE_LEAF(@"SEL$1")
      ALL_ROWS
      OPT_PARAM('_gby_hash_aggregation_enabled' 'false')
      OPT_PARAM('_fast_full_scan_enabled' 'false')
      OPT_PARAM('_b_tree_bitmap_plans' 'false')
      DB_VERSION('11.2.0.3')
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("W"='sam')

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "TEST"."N"[NUMBER,22], "W"[VARCHAR2,3]
   2 - "TEST".ROWID[ROWID,10], "W"[VARCHAR2,3]

Note
-----
   - dynamic sampling used for this statement (level=2)

47 rows selected.



Now we will create the sql profile

SYS@SAM AS SYSDBA> DECLARE
     l_sql               clob;
     BEGIN
     l_sql := q'!select * from test where w='sam'!';
     dbms_sqltune.import_sql_profile( sql_text => l_sql,
                                     name => 'SQLPROFILE_02',
                                     profile => sqlprof_attr(q'!INDEX_RS_ASC(@"SEL$1" "TEST"@"SEL$1" ("TEST"."W"))!',
             q'!OUTLINE_LEAF(@"SEL$1")!',
             q'!ALL_ROWS!',
             q'!OPT_PARAM('_gby_hash_aggregation_enabled' 'false')!',
             q'!OPT_PARAM('_fast_full_scan_enabled' 'false')!',
             q'!OPT_PARAM('_b_tree_bitmap_plans' 'false')!',
             q'!DB_VERSION('11.2.0.3')!',
             q'!OPTIMIZER_FEATURES_ENABLE('11.2.0.3')!',
             q'!IGNORE_OPTIM_EMBEDDED_HINTS!'),
             force_match => true );
     end;
     /

PL/SQL procedure successfully completed.



Once the procedure is executed, the profile is loaded and enabled.

Now that we have loaded the SQL profile, let’s get the explain plan again and make sure that the SQL profile works:

SYS@SAM AS SYSDBA> explain plan for select * from test where w='sam';

Explained.

SYS@SAM AS SYSDBA> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3981825257

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    16 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST  |     1 |    16 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("W"='sam')

Note
-----
   - SQL profile "SQLPROFILE_02" used for this statement

18 rows selected.

SYS@SAM AS SYSDBA>
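
Because the profile was created with force_match => true, it should also match the same statement text with a different literal value. A quick way to check (a sketch; output not reproduced here) is to explain the query with another literal and look for the SQL profile note:

SYS@SAM AS SYSDBA> explain plan for select * from test where w='bob';
SYS@SAM AS SYSDBA> select * from table(dbms_xplan.display());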



To list the sql profiles, use the query below.

SYS@SAM AS SYSDBA>  select name, created, status from dba_sql_profiles;

NAME                           CREATED                        STATUS
------------------------------ ------------------------------ --------
SQLPROFILE_02                  22-MAY-17 06.10.14.000000 AM   ENABLED





Should we decide that the sql profile doesn't work as expected, we can disable or drop it.


begin dbms_sqltune.alter_sql_profile('SQLPROFILE_02','STATUS','DISABLED'); end;
/

SYS@SAM AS SYSDBA> begin dbms_sqltune.alter_sql_profile('SQLPROFILE_02','STATUS','DISABLED'); end;
  2  /

PL/SQL procedure successfully completed.

SYS@SAM AS SYSDBA>



SYS@SAM AS SYSDBA> begin dbms_sqltune.drop_sql_profile('SQLPROFILE_02'); end;
  2  /

PL/SQL procedure successfully completed.



SYS@SAM AS SYSDBA> select name, created, status from dba_sql_profiles;


no rows selected












How to Use SQL tuning advisor manually without using OEM

SYS@SAM AS SYSDBA> create table test(n number,w varchar2(3));

Table created.

SYS@SAM AS SYSDBA> insert into test values(1,'sam');

1 row created.

SYS@SAM AS SYSDBA> commit;

Commit complete.

SYS@SAM AS SYSDBA> create index test1 on test(w);

Index created.

SYS@SAM AS SYSDBA> select * from test where w='sam';

         N W
---------- ---
         1 sam

SYS@SAM AS SYSDBA>  select * from test where lower(w)='sam';

         N W
---------- ---
         1 sam



SYS@SAM AS SYSDBA> declare
    tempstring  varchar2(300);
    task_id     varchar2(200);
  begin
    tempstring := ' select * from test where w=''sam''';
    task_id := dbms_sqltune.create_tuning_task(sql_text => tempstring, task_name => 'SQLTUNE2');
    dbms_sqltune.execute_tuning_task('SQLTUNE2');
  end;
  /

PL/SQL procedure successfully completed.
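
Before pulling the report, you can optionally confirm that the task finished by querying the advisor views ('SQLTUNE2' is the task name used above):

SYS@SAM AS SYSDBA> select task_name, status from dba_advisor_log where task_name = 'SQLTUNE2';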

Now let's check the recommendations from the Tuning Advisor (note: using the SQL Tuning Advisor requires a valid Oracle Tuning Pack license).

SYS@SAM AS SYSDBA> set long 5000
SYS@SAM AS SYSDBA> select dbms_sqltune.report_tuning_task('SQLTUNE2') from dual;

DBMS_SQLTUNE.REPORT_TUNING_TASK('SQLTUNE2')
--------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name   : SQLTUNE2
Tuning Task Owner  : SYS
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 1800
Completion Status  : COMPLETED
Started at         : 05/22/2017 01:47:35
Completed at       : 05/22/2017 01:47:38

-------------------------------------------------------------------------------
Schema Name: SYS
SQL ID     : 14vwm534mggft
SQL Text   :  select * from test where w='sam'

-------------------------------------------------------------------------------
FINDINGS SECTION (2 findings)
-------------------------------------------------------------------------------

1- Statistics Finding
---------------------
  Table "SYS"."TEST" was not analyzed.

  Recommendation
  --------------
  - Consider collecting optimizer statistics for this table.
    execute dbms_stats.gather_table_stats(ownname => 'SYS', tabname => 'TEST',
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
            method_opt => 'FOR ALL COLUMNS SIZE AUTO');

  Rationale
  ---------
    The optimizer requires up-to-date statistics for the table in order to
    select a good execution plan.

2- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
  A potentially better execution plan was found for this statement.

  Recommendation (estimated benefit: 33.55%)
  ------------------------------------------
  - Consider accepting the recommended SQL profile.
    execute dbms_sqltune.accept_sql_profile(task_name => 'SQLTUNE2',
            task_owner => 'SYS', replace => TRUE);

  Validation results
  ------------------
  The SQL profile was tested by executing both its plan and the original plan
  and measuring their respective execution statistics. A plan may have been
  only partially executed if the other could be run to completion in less time.

                           Original Plan  With SQL Profile  % Improved
                           -------------  ----------------  ----------
  Completion Status:            COMPLETE          COMPLETE
  Elapsed Time (s):              .000054           .000048     11.11 %
  CPU Time (s):                    .0001                 0       100 %
  User I/O Time (s):                   0                 0
  Buffer Gets:                         3                 2     33.33 %
  Physical Read Requests:              0                 0
  Physical Write Requests:             0                 0
  Physical Read Bytes:                 0                 0
  Physical Write Bytes:                0                 0
  Rows Processed:                      1                 1
  Fetches:                             1                 1
  Executions:                          1                 1

  Notes
  -----
  1. Statistics for the original plan were averaged over 10 executions.
  2. Statistics for the SQL profile plan were averaged over 10 executions.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original With Adjusted Cost
------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     5 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |     5 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("W"='sam')

2- Using SQL Profile
--------------------
Plan hash value: 3981825257

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |     5 |     3  (34)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST  |     1 |     5 |     3  (34)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TEST1 |     1 |       |     2  (50)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("W"='sam')

-------------------------------------------------------------------------------
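
If the recommendation looks good, the suggested profile can be accepted straight from the tuning task, exactly as the report recommends (the second test case below does this at the end):

SYS@SAM AS SYSDBA> execute dbms_sqltune.accept_sql_profile(task_name => 'SQLTUNE2', task_owner => 'SYS', replace => TRUE);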

**************************************************************************

Another small test case


Manually executing the SQL Tuning Advisor to get a better execution plan


SYS@TEST AS SYSDBA> grant select on apps.fnd_concurrent_requests to scott;

Grant succeeded.

SYS@TEST AS SYSDBA> exit;
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@fdlxebsdvdb1 ~> sqlplus scott/scott

SQL*Plus: Release 11.2.0.3.0 Production on Thu Feb 22 03:51:04 2018

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SCOTT@TEST > create table test1 as select * from apps.fnd_concurrent_requests;                                                                                      
Table created.

SCOTT@TEST > create table test2 as select * from apps.fnd_concurrent_requests where rownum < 1000;

Table created.

SCOTT@TEST >


select a.LAST_UPDATE_DATE,b.PHASE_CODE,b.STATUS_CODE from test1 a,test2 b where a.request_id=b.request_id;

Gather the stats:

 exec dbms_stats.gather_schema_stats('SCOTT');

PL/SQL procedure successfully completed.

SCOTT@TEST >



Log in as sysdba and run the query:

select /*+ gather_plan_statistics */ a.LAST_UPDATE_DATE,b.PHASE_CODE,b.STATUS_CODE from scott.test1 a,scott.test2 b where a.request_id=b.request_id;

Execution Plan
----------------------------------------------------------
Plan hash value: 3916441650

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |  1004 | 24096 |  1903   (3)| 00:00:04 |
|*  1 |  HASH JOIN         |       |  1004 | 24096 |  1903   (3)| 00:00:04 |
|   2 |   TABLE ACCESS FULL| TEST2 |   999 |  9990 |     9   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| TEST1 |   236K|  3238K|  1888   (3)| 00:00:04 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"."REQUEST_ID"="B"."REQUEST_ID")


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      18407  consistent gets
         96  physical reads
          0  redo size
      25574  bytes sent via SQL*Net to client
       1246  bytes received via SQL*Net from client
         68  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        999  rows processed


A full table scan is happening on both tables.

Let's create an index:

create index test1_i on test1(request_id);

SYS@TEST AS SYSDBA> create index test1_i on scott.test1(request_id);

Index created.

SYS@TEST AS SYSDBA>


Run gather stats again:

SYS@TEST AS SYSDBA> exec dbms_stats.gather_schema_stats('SCOTT');

PL/SQL procedure successfully completed.

 set autotrace on;
SYS@TEST AS SYSDBA> select /*+ gather_plan_statistics */ a.LAST_UPDATE_DATE,b.PHASE_CODE,b.STATUS_CODE from scott.test1 a,scott.test2 b where a.request_id=b.request_id;


----------------------------------------------------------
Plan hash value: 3916441650

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |  1004 | 24096 |  1903   (3)| 00:00:04 |
|*  1 |  HASH JOIN         |       |  1004 | 24096 |  1903   (3)| 00:00:04 |
|   2 |   TABLE ACCESS FULL| TEST2 |   999 |  9990 |     9   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| TEST1 |   236K|  3238K|  1888   (3)| 00:00:04 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"."REQUEST_ID"="B"."REQUEST_ID")


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      18407  consistent gets
          0  physical reads
          0  redo size
      25574  bytes sent via SQL*Net to client
       1246  bytes received via SQL*Net from client
         68  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        999  rows processed


Let's create the index on test2 also:

create index test2_i on scott.test2(request_id);

SYS@TEST AS SYSDBA> create index test2_i on scott.test2(request_id);

Index created.

SYS@TEST AS SYSDBA>  exec dbms_stats.gather_schema_stats('SCOTT');

PL/SQL procedure successfully completed.

No change happened; the optimizer still chooses the hash join with full table scans:

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |  1004 | 24096 |  1903   (3)| 00:00:04 |
|*  1 |  HASH JOIN         |       |  1004 | 24096 |  1903   (3)| 00:00:04 |
|   2 |   TABLE ACCESS FULL| TEST2 |   999 |  9990 |     9   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| TEST1 |   236K|  3238K|  1888   (3)| 00:00:04 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"."REQUEST_ID"="B"."REQUEST_ID")


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      18407  consistent gets
          0  physical reads
          0  redo size
      25574  bytes sent via SQL*Net to client
       1246  bytes received via SQL*Net from client
         68  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        999  rows processed

SYS@TEST AS SYSDBA>



Let's create a tuning task:

SYS@TEST AS SYSDBA> declare
    tempstring  varchar2(32767);
    task_id     varchar2(32767);
  begin
    tempstring := 'select a.LAST_UPDATE_DATE,b.PHASE_CODE,b.STATUS_CODE from scott.test1 a,scott.test2 b where a.request_id=b.request_id';
    task_id := dbms_sqltune.create_tuning_task(sql_text => tempstring, task_name => 'SQLTUNE2');
    dbms_sqltune.execute_tuning_task('SQLTUNE2');
  end;
  /

PL/SQL procedure successfully completed.


SYS@TEST AS SYSDBA> set long 5000
SYS@TEST AS SYSDBA> select dbms_sqltune.report_tuning_task('SQLTUNE2') from dual;

DBMS_SQLTUNE.REPORT_TUNING_TASK('SQLTUNE2')
--------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name   : SQLTUNE2
Tuning Task Owner  : SYS
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 1800
Completion Status  : COMPLETED
Started at         : 02/22/2018 05:37:38
Completed at       : 02/22/2018 05:37:41

-------------------------------------------------------------------------------
Schema Name: SYS
SQL ID     : 40gbu4v53uscg
SQL Text   : select a.LAST_UPDATE_DATE,b.PHASE_CODE,b.STATUS_CODE from
             scott.test1 a,scott.test2 b where a.request_id=b.request_id

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
  A potentially better execution plan was found for this statement.

  Recommendation (estimated benefit: 95.13%)
  ------------------------------------------
  - Consider accepting the recommended SQL profile.
    execute dbms_sqltune.accept_sql_profile(task_name => 'SQLTUNE2',
            task_owner => 'SYS', replace => TRUE);

  Validation results
  ------------------
  The SQL profile was tested by executing both its plan and the original plan
  and measuring their respective execution statistics. A plan may have been
  only partially executed if the other could be run to completion in less time.

                           Original Plan  With SQL Profile  % Improved
                           -------------  ----------------  ----------
  Completion Status:            COMPLETE          COMPLETE
  Elapsed Time (s):              .072423           .003432     95.26 %
  CPU Time (s):                  .072589           .003499     95.17 %
  User I/O Time (s):                   0                 0
  Buffer Gets:                     18345               893     95.13 %
  Physical Read Requests:              0                 0
  Physical Write Requests:             0                 0
  Physical Read Bytes:                 0                 0
  Physical Write Bytes:                0                 0
  Rows Processed:                    999               999
  Fetches:                           999               999
  Executions:                          1                 1

  Notes
  -----
  1. Statistics for the original plan were averaged over 10 executions.
  2. Statistics for the SQL profile plan were averaged over 10 executions.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original With Adjusted Cost
------------------------------
Plan hash value: 3916441650

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |  2077 | 49848 |  1903   (3)| 00:00:04 |
|*  1 |  HASH JOIN         |       |  2077 | 49848 |  1903   (3)| 00:00:04 |
|   2 |   TABLE ACCESS FULL| TEST2 |   999 |  9990 |     9   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| TEST1 |   236K|  3238K|  1888   (3)| 00:00:04 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"."REQUEST_ID"="B"."REQUEST_ID")

2- Using SQL Profile
--------------------
Plan hash value: 2712629061

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |  2077 | 49848 |  2019   (1)| 00:00:05 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST1   |     2 |    28 |     3  (34)| 00:00:01 |
|   2 |   NESTED LOOPS              |         |  2077 | 49848 |  2019   (1)| 00:00:05 |
|   3 |    TABLE ACCESS FULL        | TEST2   |   999 |  9990 |    16   (7)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN         | TEST1_I |     2 |       |     2  (50)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("A"."REQUEST_ID"="B"."REQUEST_ID")

-------------------------------------------------------------------------------




Execution Plan
----------------------------------------------------------
Plan hash value: 1388734953

-----------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     1 |     2   (0)| 00:00:01 |
|   1 |  FAST DUAL       |      |     1 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------


Statistics
----------------------------------------------------------
       2547  recursive calls
        985  db block gets
       3793  consistent gets
         40  physical reads
        780  redo size
      45402  bytes sent via SQL*Net to client
      39769  bytes received via SQL*Net from client
        189  SQL*Net roundtrips to/from client
        129  sorts (memory)
          0  sorts (disk)
          1  rows processed

SYS@TEST AS SYSDBA>


SYS@TEST AS SYSDBA> execute dbms_sqltune.accept_sql_profile(task_name =>'SQLTUNE2',task_owner => 'SYS', replace => TRUE);

PL/SQL procedure successfully completed.

SYS@TEST AS SYSDBA> exec dbms_stats.gather_schema_stats('SCOTT');

PL/SQL procedure successfully completed.
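
With the profile accepted, the join should now pick the nested-loops plan recommended in the report. A quick verification (a sketch; output not reproduced here) is to check dba_sql_profiles and re-run the statement with autotrace still enabled:

SYS@TEST AS SYSDBA> select name, status from dba_sql_profiles;
SYS@TEST AS SYSDBA> select /*+ gather_plan_statistics */ a.LAST_UPDATE_DATE,b.PHASE_CODE,b.STATUS_CODE from scott.test1 a,scott.test2 b where a.request_id=b.request_id;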










Monday, May 8, 2017

FAL[server, ARC0]: Error 12541 creating remote archivelog file while setting up a physical standby database for EBS 12.2



While I was working on setting up a physical standby for EBS 12.2, I hit an issue where the RFS processes were not coming up on the standby database. RFS is a background process and it should be started by default when we configure Data Guard. Apart from RFS, there are other important processes:
================

-     Archiver Process (ARCn or ARCH) - The archiver process is responsible for archiving online redo logs. The archival destination could be a local destination or a remote standby database site. In the case of a Data Guard configuration, one of the archival destinations must be a standby database. The archiver process of the primary database writes the redo log file.
For a better data protection mode, the standby redo log files can be configured on the standby database. In this case, the archiver process on the standby site will be used to archive the standby redo log files.

-     Log Writer (LGWR) - The log writer process on the primary database writes entries from the redo log buffer to the online redo log file. When the current online redo log file is full, it triggers the archiver process to start the archiving activity. In some cases, the log writer process writes redo entries to the online redo log file of the primary database and the standby redo log file of the standby database. Usually, in this kind of arrangement the LGWR works as the log transport agent that is set up to achieve high data protection modes.

-     Remote File Server (RFS) Process - The RFS process runs on the standby database and is responsible for communication between the primary and the standby database. For the log transport service, the RFS on the standby database receives the redo records from the archiver or the log writer process of the primary database over Oracle Net and writes them to the filesystem on the standby site.

-     Fetch Archive Log (FAL) - The FAL process has two components: FAL Client and FAL Server. Both processes are used for archive gap resolution. If the Managed Recovery Process (MRP) on the standby database site detects an archive gap sequence, it initiates a fetch request to the FAL client on the standby site. This action, in turn, requests the FAL server process on the primary database to re-transmit the archived log files to resolve the gap sequence. Archive gap resolution is covered in the second half of this post.

Once the log transport service completes the transmission of redo records to the standby site, the log apply service starts applying the changes to the standby database. The log apply service operates solely on the standby database. The following processes on the standby site facilitate the log apply operations:

-     Managed Recovery Process (MRP) - The MRP applies the redo entries from the archived redo logs onto the physical standby database.

-     Logical Standby Process (LSP) - The LSP applies the redo records from archived redo logs to the logical standby database. The Oracle database log miner engine is used by the logical standby process for the SQL apply operations. Using the log miner engine, the LSP process recreates the SQL statements from redo logs that have been executed on the primary database. These statements are then applied to the standby database to keep it current with the primary database.

=======================

Now coming back to the error: after configuring Data Guard, I first did some sanity checking using the query below and found that the RFS processes were not running.

SQL> select thread#,sequence#,process,client_process,status,blocks from v$managed_standby;

   THREAD#  SEQUENCE# PROCESS   CLIENT_P STATUS           BLOCKS
---------- ---------- --------- -------- ------------ ----------
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         1         99 MRP0      N/A      WAIT_FOR_LOG          0

9 rows selected.


I checked the alert log and found this message:

Managed Standby Recovery starting Real Time Apply
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Media Recovery Waiting for thread 1 sequence 99
2017-02-26 21:44:59.709000 -05:00


Hmm, the communication between the primary and the standby was not yet fully established. So I logged into the primary and ran the queries below.

SQL> SELECT thread#, dest_id, gvad.status, error, fail_sequence FROM gv$archive_dest gvad, gv$instance gvi WHERE gvad.inst_id = gvi.inst_id AND destination is NOT NULL ORDER BY thread#, dest_id;

   THREAD#    DEST_ID STATUS
---------- ---------- ---------
ERROR                                                             FAIL_SEQUENCE
----------------------------------------------------------------- -------------
         1          1 VALID
                                                                              0

         1          2 ERROR
ORA-12541: TNS:no listener                                                    0


SQL> SELECT gvi.thread#, timestamp, message FROM gv$dataguard_status gvds, gv$instance gvi WHERE gvds.inst_id = gvi.inst_id AND severity in ('Error','Fatal') ORDER BY timestamp, thread#;

   THREAD# TIMESTAMP
---------- ------------------
MESSAGE
--------------------------------------------------------------------------------
         1 24-FEB-17
Error 12541 received logging on to the standby

         1 24-FEB-17
FAL[server, ARC0]: Error 12541 creating remote archivelog file 'PRODDR'



Redo transport to the standby was clearly not working. I then verified the pfile and found a discrepancy in the parameter *.log_archive_dest_2: it was initially pointing at the primary database service. I had to modify it to point to the correct values:

*.log_archive_dest_2='SERVICE=PRODDR LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRODDR OPTIONAL REOPEN=15 MAX_FAILURE=100 NET_TIMEOUT=300'
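
If the database runs on an spfile, the same correction can be made with ALTER SYSTEM instead of editing the pfile (a sketch using the service and db_unique_name from this environment):

SQL> alter system set log_archive_dest_2='SERVICE=PRODDR LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRODDR OPTIONAL REOPEN=15 MAX_FAILURE=100 NET_TIMEOUT=300' scope=both;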


I restarted the database and the listener, and ran the query below again.

SQL> SELECT gvi.thread#, timestamp, message FROM gv$dataguard_status gvds, gv$instance gvi WHERE gvds.inst_id = gvi.inst_id AND severity in ('Error','Fatal') ORDER BY timestamp, thread#;

no rows selected

SQL> /


No error message was reported this time. I logged into the standby and ran the query:

SQL> select thread#,sequence#,process,client_process,status,blocks from v$managed_standby;

   THREAD#  SEQUENCE# PROCESS   CLIENT_P STATUS           BLOCKS
---------- ---------- --------- -------- ------------ ----------
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         1        104 ARCH      ARCH     CLOSING             605
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         0          0 ARCH      ARCH     CONNECTED             0
         1        105 MRP0      N/A      APPLYING_LOG    2048000
         1        105 RFS       LGWR     IDLE                  1
         0          0 RFS       UNKNOWN  IDLE                  0

   THREAD#  SEQUENCE# PROCESS   CLIENT_P STATUS           BLOCKS
---------- ---------- --------- -------- ------------ ----------
         0          0 RFS       UNKNOWN  IDLE                  0
         0          0 RFS       UNKNOWN  IDLE                  0



All is fine now; the RFS processes are running. We can also verify the same from the alert log:

RFS[1]: Assigned to RFS process (PID:13064)
RFS[1]: Selected log 3 for thread 1 sequence 105 dbid 345897223 branch 934034088
RFS[2]: Assigned to RFS process (PID:13068)
RFS[2]: Selected log 4 for thread 1 sequence 104 dbid 345897223 branch 934034088
2017-02-26 22:38:36.939000 -05:00
RFS[3]: Assigned to RFS process (PID:13076)
RFS[3]: Opened log for thread 1 sequence 100 dbid 345897223 branch 934034088
RFS[4]: Assigned to RFS process (PID:13080)
RFS[4]: Opened log for thread 1 sequence 99 dbid 345897223 branch 934034088


Reference: RFS Is not coming up on standby Due to ORA-12154 on transport (primary side) (Doc ID 2196182.1)

*******************************************************************
When you are using Data Guard, there are several scenarios in which the physical standby can go out of sync with the primary database.

Before doing anything to correct the problem, we need to verify why the standby is not in sync with the primary. In this particular article, we are covering the scenario where a log is missing from the standby but, apart from the missing log, all other logs are available.

Verify from v$archived_log that there is a gap in the sequence number. All the logs up to that gap should have APPLIED=YES and all the sequence# after the missing log sequence# are APPLIED=NO. This means that due to the missing log, MRP is not applying the logs on standby but the logs are still being transmitted to the standby and are available.

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

So for example, if the missing log sequence# is 400, then the above query should show that up to sequence#399, all have APPLIED=YES and starting from 401, all are APPLIED=NO.
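
Another quick way to spot the gap from the standby is the v$archive_gap view, which reports the lowest missing sequence range per thread:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;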

There are a few steps to be performed when the standby is not in sync with the primary because of a log gap on the standby.

These steps are:

STEP #1: Take an incremental backup of the primary from the SCN where the standby is lagging behind and apply it on the standby server

STEP #2: If step#1 is not able to sync up, then re-create the controlfile of standby database from the primary

STEP #3: If after step#2, you still find that logs are not being applied on the standby, check the alert log and you may need to re-register the logs with the standby database.

*******************************************************************************************

STEP#1

1. On STANDBY database query the v$database view and record the current SCN of the standby database:

SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
-----------
1.3945E+10

SQL> SELECT to_char(CURRENT_SCN) FROM V$DATABASE;
TO_CHAR(CURRENT_SCN)

----------------------------------------
13945141914

2. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
*
ERROR at line 1:
ORA-16136: Managed Standby Recovery not active
If you see the above error, it means Managed Recovery is already off

You can also confirm from the view v$managed_standby to see if the MRP is running or not

SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;

3. Connect to the primary database as the RMAN target and create an incremental backup from the current SCN of the standby database that was recorded in step 1:

For example,

BACKUP INCREMENTAL FROM SCN 13945141914 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FOR STANDBY'

You can choose a location other than /tmp also.

4. Do a recovery of the standby database using the incremental backup of the primary taken above:

On the standby server, without connecting to the recovery catalog, catalog the backup piece of the incremental backup. Before this, of course, you need to copy the backup piece to a location accessible to the standby server.

$ rman nocatalog target /
RMAN> CATALOG BACKUPPIECE '/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1';

Now in the same session, start the recovery

RMAN> RECOVER DATABASE NOREDO;

You should see something like:

Starting recover at 2015-09-17 04:59:57
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=309 devtype=DISK
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
....
..
..
.
channel ORA_DISK_1: reading from backup piece /dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1 tag=FOR STANDBY
channel ORA_DISK_1: restore complete, elapsed time: 01:53:08
Finished recover at 2015-07-25 05:20:3

Delete the backup set from the standby:

RMAN> DELETE BACKUP TAG 'FOR STANDBY';
using channel ORA_DISK_1
List of Backup Pieces

BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
17713   17713   1   1   AVAILABLE   DISK        /dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1

Do you really want to delete the above objects (enter YES or NO)? YES

deleted backup piece

backup piece handle=/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1 recid=17713 stamp=660972421

Deleted 1 objects

5. Try to start the managed recovery.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

If you get an error here, go to STEP #2 to bring the standby back in sync.

If there is no error, then use the view v$managed_standby to verify that the MRP process has started and has the status APPLYING_LOG.
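For example (the WHERE clause simply filters the output down to the recovery process):

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';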

6. After this, check whether the logs are being applied on the standby or not:

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

After doing a recovery using the incremental backup, you will no longer see the sequence#s that were visible earlier with APPLIED=NO, because their changes have been absorbed into the incremental backup and applied on the standby during the recovery.

The APPLIED column starts showing YES for the logs that are being transmitted now, which means logs are being applied again.

Check the status of the MRP process in the view v$managed_standby. The status should be APPLYING_LOG while the available logs are being applied, and once all available logs have been applied, the status should change to WAIT_FOR_LOG.

7. Another check to verify that the primary and standby are in sync. Run the following query on both the standby and the primary:

SQL> select max(sequence#) from v$log_history;

The output should be the same on both databases.
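On the standby, the highest applied log can also be checked directly from v$archived_log; a quick sketch:

SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED='YES';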

*******************************************************************************************

STEP #2:

Since managed recovery failed after applying the incremental backup, we need to recreate the controlfile of the standby. The reason for recreating the controlfile is that the database SCN recorded in the controlfile was not updated after the incremental backup was applied, even though the SCNs of the datafiles were updated; as a result, the standby database was still looking for the old files to apply.

A good My Oracle Support (MOS) note for re-creating the controlfile in such a scenario is 734862.1.

Steps to recreate the standby controlfile and start the managed recovery on standby:

1. Take a backup of the controlfile for standby from the primary:

rman target sys/oracle@proddb catalog rman/cat@emrep
backup current controlfile for standby;
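If you want the backup piece written to a known location (rather than the default, which produced the o1_mf_... piece restored below), a FORMAT clause can be added; the path here is only illustrative:

backup current controlfile for standby format '/tmp/ForStandbyCTRL_%U';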

2. Copy the controlfile backup to the standby system (if it is on a common NFS mount, there is no need to transfer or copy it) and restore the controlfile onto the standby database.

Shut down the standby (all instances, if the standby is RAC).

sqlplus / as sysdba
shutdown immediate
exit

Start up one instance in NOMOUNT mode.

sqlplus / as sysdba
startup nomount
exit

Restore the standby control file:

rman nocatalog target /
restore standby controlfile from '/tmp/o1_mf_TAG20070220T151030_.bkp';
exit

3. Start up the standby with the new control file.

sqlplus / as sysdba
shutdown immediate
startup mount
exit

4. Restart managed recovery in one instance of the standby database (if the standby is RAC):

sqlplus / as sysdba
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

The above statement may succeed without errors, but the MRP process will still not start. The reason is that, since the controlfile has been restored from the primary, it is looking for the datafiles at the same locations as on the primary instead of the standby. For example, if the primary datafiles are located at '+DATA/proddb_1/DATAFILE' and the standby datafiles are at '+DATA/proddb_2/DATAFILE', the new controlfile will show the datafiles' location as '+DATA/proddb_1/DATAFILE'. This can be verified with the query "select name from v$datafile" on the standby instance, as shown below. We need to rename all the datafiles to reflect the correct location.
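For example, on the standby:

SQL> SELECT NAME FROM V$DATAFILE;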

There are two ways to rename the datafiles:

1. Without using RMAN

Change the parameter standby_file_management to MANUAL in the standby's parameter file, and then rename each datafile individually, for example:

ALTER DATABASE RENAME FILE '+DATA/proddb_1/datafile/users.310.620229743' TO '+DATA/proddb_2/datafile/USERS.1216.648429765';
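The parameter is also dynamic, so as an alternative to editing the parameter file it can be changed from SQL*Plus and set back to AUTO once all the renames are done; a minimal sketch:

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;

(rename every datafile with ALTER DATABASE RENAME FILE, as shown above)

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;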

2. Using RMAN

rman nocatalog target /

Catalog the files; the string specified should refer to the diskgroup/filesystem destination of the standby datafiles.

RMAN> catalog start with '+diskgroup/<dbname>/datafile/';

e.g.:

RMAN> catalog start with '+DATA/proddb_2/datafile/';

This will list the files it finds and ask whether they should all be cataloged. Review the list and answer YES if all the datafiles are properly listed.

Once that is done, commit the changes to the controlfile:

RMAN> switch database to copy;
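To confirm that the controlfile now points at the standby's copies of the datafiles, one way is to list the datafiles as the controlfile sees them:

RMAN> report schema;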

Now start the managed recovery as:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

and check the processes in the view v$managed_standby. The MRP process should be there. It will also start applying all the archived logs that have accumulated since the last applied log. This process might take hours.
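To get a rough idea of how far behind the standby still is while those logs are being applied, v$dataguard_stats can be queried on the standby; a quick sketch:

SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS WHERE NAME IN ('transport lag', 'apply lag');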

5. Another check to verify that primary and standby are in sync:

Run the following query on both standby and primary after all logs in v$archived_log show APPLIED=YES:

SQL> select max(sequence#) from v$log_history;

The output should be the same on both databases.

*****************************************************************************************

STEP #3

If, after recreating the controlfile, you still find that logs are being transmitted but not applied on the standby, check the alert log of the standby. For example, see if you find something similar to the snippet below:

Fetching gap sequence in thread 1, gap sequence 74069-74095
Wed Sep 17 06:45:47 2015
RFS[1]: Archived Log: '+DATA/ipwp_sac1/archivelog/2008_09_17/thread_1_seq_74093.259.665649929'
Wed Sep 17 06:45:55 2015
Fetching gap sequence in thread 1, gap sequence 74069-74092
Wed Sep 17 06:45:57 2015
RFS[1]: Archived Log: '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74094.258.665649947'
Wed Sep 17 06:46:16 2015
RFS[1]: Archived Log: '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74095.256.665649957'
Wed Sep 17 06:46:26 2015
FAL[client]: Failed to request gap sequence
 GAP - thread 1 sequence 74069-74092
The contents of the alert log show that log sequence#s 74069 to 74092 may have been transmitted but not applied. The view v$archived_log shows sequence#s starting from 74093 with APPLIED=NO.

So this situation means that logs up to sequence# 74068 were applied as part of the incremental backup, and sequence#s 74069 to 74092 have been transferred to the standby server but must have failed to register with the standby database. Try the following steps:

Locate the archived logs for the sequence#s shown in the alert log (74069 to 74092 in this example), for example +DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74069.995.665630861; one way to list them is shown below.
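The archived log files under the standby's archive destination can be listed with asmcmd (the directory is the one from this example; run it as the Grid/ASM software owner):

$ asmcmd ls +DATA/proddb_2/archivelog/2008_09_17/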
Register all these archived logs with the standby database:
alter database register logfile '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74069.995.665630861';

alter database register logfile '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74070.998.665631405';

alter database register logfile '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74071.792.665633755';

alter database register logfile '+DATA/proddb_2/archivelog/2008_09_17/thread_1_seq_74072.263.665633713';

... and so on, until the last missing log has been registered.


Now check the view v$archived_log and you should finally see the logs being applied. The status of the MRP process should change from WAIT_FOR_GAP to APPLYING_LOG and eventually WAIT_FOR_LOG.