Saturday, 4 July 2015

SCRIPTS :

Run the following script :


Show database version with options installed :



set head off feed off pages 0 serveroutput on
col banner format a72 wrap
select banner
from   sys.v_$version;
select '   With the '||parameter||' option'
from   sys.v_$option
where  value = 'TRUE';
select '   The '||parameter||' option is not installed'
from   sys.v_$option
where  value <> 'TRUE';
begin
    dbms_output.put_line('Port String: '||dbms_utility.port_string);
end;
/



SCRIPTS :

Run the following script :



Lists all objects currently being accessed in the schema. The script takes two parameters: the owner (&1) and the object name (&2); pass ALL for either to match everything.

SET LINESIZE 255
SET VERIFY OFF

COLUMN object FORMAT A30

SELECT a.object,
       a.type,
       a.sid,
       b.serial#,
       b.username,
       b.osuser,
       b.program
FROM   v$access a,
       v$session b
WHERE  a.sid    = b.sid
AND    a.owner  = DECODE(UPPER('&1'), 'ALL', a.object, UPPER('&1'))
AND    a.object = DECODE(UPPER('&2'), 'ALL', a.object, UPPER('&2'))
ORDER BY a.object;
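
If you save the script above to a file (the name objects_accessed.sql below is just an example), you can pass the owner and object name as positional parameters, or ALL for either one to skip that filter:

SQL> @objects_accessed SCOTT EMP
SQL> @objects_accessed ALL ALL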





SCRIPTS :

Run the following script :


Example of a CROSS MATRIX report implemented using standard SQL.

SELECT job,
                sum(decode(deptno,10,sal)) DEPT10,
                sum(decode(deptno,20,sal)) DEPT20,
                sum(decode(deptno,30,sal)) DEPT30,
                sum(decode(deptno,40,sal)) DEPT40
           FROM scott.emp
       GROUP BY job
/
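
On 11g and later the same cross matrix can also be written with the PIVOT clause; here is a minimal sketch against the same SCOTT.EMP table:

SELECT *
  FROM (SELECT job, deptno, sal FROM scott.emp)
 PIVOT (SUM(sal) FOR deptno IN (10 AS DEPT10, 20 AS DEPT20, 30 AS DEPT30, 40 AS DEPT40))
 ORDER BY job;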



Count the number of rows for ALL tables in the current schema

Run the following script:

-----------------------

set termout off echo off feed off trimspool on head off pages 0
spool C:\countall.txt
select 'SELECT count(*), '''||table_name||''' from '||table_name||';'
from   user_tables
/
spool off


-----------------------
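
The spool file now contains one SELECT per table. Run it afterwards to get the actual counts:

SQL> @C:\countall.txt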

Thanks.
NJ
Database Health Check (DBMS_HM)

As an Oracle DBA, you may want to run health checks on your database with the Health Monitor.

With Oracle 11g, these checks can be done:
– DB Structure Integrity Check
– CF Block Integrity Check
– Data Block Integrity Check
– Redo Integrity Check
– Transaction Integrity Check
– Undo Segment Integrity Check
– Dictionary Integrity Check
– ASM Allocation Check

Viewing the list of checks that can be done on your database:

SELECT name
 FROM v$hm_check
WHERE internal_check = 'N';

Health checks accept input parameters; some are mandatory while others are optional.

Displaying parameter information for all health checks:

SELECT c.name check_name,
       p.name parameter_name,
       p.TYPE,
       p.DEFAULT_VALUE,
       p.description
  FROM v$hm_check_param p, v$hm_check c
 WHERE p.check_id = c.id AND c.internal_check = 'N'
ORDER BY c.name;

Running a check:

BEGIN
   DBMS_HM.run_check ('Dictionary Integrity Check', 'report_dictionary_integrity');
END;
/

or

BEGIN
   DBMS_HM.RUN_CHECK (check_name     => 'Transaction Integrity Check',
                      run_name       => 'my_transaction_run',
                      input_params   => 'TXN_ID=22.87.1');
END;
/

Viewing the first report in text format with DBMS_HM (HTML and XML formats are also available):

SET LONG 100000
SET LONGCHUNKSIZE 1000
SET PAGESIZE 1000
SET LINESIZE 512
SELECT DBMS_HM.get_run_report ('report_dictionary_integrity') FROM DUAL;

Listing all the health checks executed (Health Monitor view):

SELECT run_id,
       name,
       check_name,
       run_mode,
       status,
       src_incident,
       num_incident,
       error_number
  FROM v$hm_run;
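
Findings and recommendations produced by a run can also be queried from the Health Monitor views; a minimal sketch (the exact column list may vary slightly between 11g versions):

SELECT f.run_id,
       f.name,
       f.priority,
       f.status,
       f.description
  FROM v$hm_finding f
 ORDER BY f.run_id;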
 

Thanks.
NJ



Find some of the useful scripts used from the following link :

https://drive.google.com/file/d/0B6DbhsI8ALh-TU5iQy1NeVRTZk0/view?usp=sharing

Thanks.
NJ
Getting "ORA-00942: table or view does not exist" when missing REFERENCES privilege

SQL> create user usr1 identified by usr1;
User created.
SQL> grant dba to usr1;
Grant succeeded.
SQL> create user usr2 identified by usr2;
User created.
SQL> grant dba to usr2;
Grant succeeded.
SQL> conn usr1/usr1
Connected.
SQL> create table tbl_usr1 (id number);
Table created.
SQL> conn usr2/usr2
Connected.
SQL> create table tbl_usr2 (id number);
Table created.
SQL> insert into tbl_usr2 values(1);
1 row created.
SQL> commit;
Commit complete.
SQL> alter table tbl_usr2 add primary key(id);
Table altered.
SQL> create public synonym tbl_usr2 for tbl_usr2;
Synonym created.
SQL> disc
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options
SQL> conn usr1/usr1
Connected.
SQL> select * from tbl_usr2;
        ID
----------
         1
SQL> alter table tbl_usr1 add foreign key (id) references tbl_usr2(id);
alter table tbl_usr1 add foreign key (id) references tbl_usr2(id)
                                                     *
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select * from tbl_usr2;
        ID
----------
         1

Although I can query the table, creating a foreign key that references it fails with the "ORA-00942: table or view does not exist" error. The reason is that the user doesn't have the REFERENCES privilege, which must be granted directly.

The REFERENCES privilege is required on a table so that a user in another schema can reference it in a foreign key constraint; without it, the constraint cannot be created even though the table itself can be queried.

SQL> conn usr2/usr2
Connected.
SQL> grant references on tbl_usr2 to usr1;
Grant succeeded.
SQL> conn usr1/usr1
Connected.
SQL> alter table tbl_usr1 add foreign key (id) references tbl_usr2(id);
Table altered.
SQL>
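
To verify the grant, the object privilege can be checked from the dictionary (as a privileged user):

SELECT grantee, owner, table_name, privilege, grantable
  FROM dba_tab_privs
 WHERE owner = 'USR2'
   AND table_name = 'TBL_USR2';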




Thanks.
NJ







HOW TO KNOW WHETHER DATA PUMP IS LAGGING, OR WHICH OBJECT TYPE IS BEING PROCESSED AND THE TIME ELAPSED :

While importing a dump file, the job sometimes takes too long and seems to be "hanging", processing something unknown in the background or queuing for something. As a DBA, you should NOT just wait "until it finishes"; you have to find out what is actually going on.


This is the command I run to import the dump file into the new database; metrics=y makes Data Pump record, for each object type, the number of objects processed and the time taken, as you can see in the log below:

impdp system/sys@orcl directory=data_pump_dir dumpfile=dd.dmp logfile=dd1.log remap_schema=temp:champ metrics=y

--------

Import: Release 11.2.0.1.0 - Production on Sat Jul 4 14:59:38 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@orcl directory=data_pump_dir dumpfile=dd.dmp logfile=dd1.log remap_schema=temp:champ metrics=y
Processing object type SCHEMA_EXPORT/USER
     Completed 1 USER objects in 0 seconds
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
     Completed 1 SYSTEM_GRANT objects in 0 seconds
Processing object type SCHEMA_EXPORT/ROLE_GRANT
     Completed 3 ROLE_GRANT objects in 0 seconds
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
     Completed 1 DEFAULT_ROLE objects in 0 seconds
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
     Completed 1 PROCACT_SCHEMA objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/TABLE
     Completed 1 TABLE objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "CHAMP"."TBL_READ_ONLY"                     5.015 KB       1 rows
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
     Completed 1 TRIGGER objects in 0 seconds
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 14:59:46


--------
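
If a job really does seem to hang, you don't have to rely on the log alone. From another session you can check the job state and its long-running operations, for example with the queries below (a sketch; the job name comes from the log above). You can also re-attach to the job with impdp system/*** attach=SYS_IMPORT_FULL_01 and issue STATUS at the interactive prompt.

SELECT owner_name, job_name, operation, job_mode, state
  FROM dba_datapump_jobs;

SELECT sid, opname, target, sofar, totalwork,
       ROUND(sofar/totalwork*100, 2) pct_done, time_remaining
  FROM v$session_longops
 WHERE opname LIKE 'SYS_IMPORT%'
   AND totalwork > 0
   AND sofar <> totalwork;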

Thanks.
NJ

How to delete duplicate rows
Today I want to give you an example of how to delete duplicate rows from a table.

Let’s create new table with name students:

CREATE TABLE students (ID NUMBER, NAME VARCHAR2(10), surname VARCHAR2(10));

Now let’s add some duplicate rows to it:

INSERT INTO students VALUES (1, 'Nitesh', 'Jauhari');
INSERT INTO students VALUES (2, 'Edward', 'Smith');
INSERT INTO students VALUES (1, 'Nitesh', 'Jauhari');
INSERT INTO students VALUES (2, 'Edward', 'Smith');
INSERT INTO students VALUES (1, 'Nitesh', 'Jauhari');
INSERT INTO students VALUES (2, 'Edward', 'Smith');
And now let's look at the data in our table:
SELECT * FROM students
1  Nitesh  Jauhari
2  Edward   Smith
1  Nitesh  Jauhari
2  Edward   Smith
1  Nitesh  Jauhari
2  Edward   Smith
Now let’s delete duplicate rows using this script:

DELETE FROM students  
WHERE ROWID NOT IN (SELECT MAX (ROWID) FROM students GROUP BY ID)

And now let's look at the students table:

SELECT * FROM students

1  Nitesh  Jauhari
2  Edward   Smith
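
If the duplicates should be judged on all columns rather than only on ID, a variant using the ROW_NUMBER analytic function works as well (a sketch against the same students table):

DELETE FROM students
 WHERE ROWID IN (SELECT rid
                   FROM (SELECT ROWID rid,
                                ROW_NUMBER() OVER (PARTITION BY id, name, surname
                                                   ORDER BY ROWID) rn
                           FROM students)
                  WHERE rn > 1);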


Thanks.
NJ
CREATING A READ ONLY TABLE USING TRIGGERS

In this example, we'll create a table, insert data into it, and then create a trigger to prevent any insert, update or delete statement on the table.

SQL> CREATE TABLE tbl_read_only (id NUMBER);

Table created.

SQL> INSERT INTO tbl_read_only VALUES(1);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT * FROM tbl_read_only;

        ID
----------
         1

SQL> CREATE TRIGGER trg_tbl_read_only BEFORE DELETE OR INSERT OR UPDATE
  2  ON tbl_read_only
  3  BEGIN
  4  RAISE_APPLICATION_ERROR(-20202,'Table Status: READ ONLY!!!');
  5  END;
  6  /

Trigger created.

SQL> INSERT INTO tbl_read_only VALUES(2);
INSERT INTO tbl_read_only VALUES(2)
            *
ERROR at line 1:
ORA-20202: Table Status: READ ONLY!!!
ORA-06512: at "myschema.TRG_TBL_READ_ONLY", line 2
ORA-04088: error during execution of trigger 'myschema.TRG_TBL_READ_ONLY'
SQL>


Thanks.
NJ

 
Automatically display SID and connected User in Sql*Plus

In this example, we'll change the SQL*Plus prompt to display the database name and connected user name instead of the default "SQL>".

If you're using SQL*Plus, you know that every time you type a command, you're typing it after the "SQL>" prompt:

SQL>show user
USER is "SYS"
SQL>

If you want to display your SID and connected user name instead of "SQL>", you have to change "glogin.sql" in your $ORACLE_HOME/sqlplus/admin directory as below:

set termout off
define gname=idle
column global_name new_value gname
select lower(user)||'@'||substr(global_name,1,decode(dot,0,length(global_name),dot-1)) global_name from (select global_name, instr(global_name,'.') dot from global_name);
set sqlprompt '&gname>'
set termout on

After that, add an alias to your /home/oracle/.bashrc file:

alias sqlplus='export ORACLE_SID=orcl;sqlplus "/ as sysdba"'

Every time you type sqlplus from a Linux terminal, you'll connect automatically to the database orcl as SYSDBA and your SQL> prompt will be changed to sys@orcl>
[oracle@localhost ~]$ sqlplus

SQL*Plus: Release 10.1.0.3.0 - Production on Mon Dec 8 06:29:29 2008

Copyright (c) 1982, 2004, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

sys@orcl>show user
USER is "SYS"


Thanks.
NJ

 
Solution to track DML statements on schema level

In one of the posts, a user asked a question about DML triggers that would cover all tables of a schema; the user wanted to write a schema-level DML trigger. As we know, that's impossible: DML triggers are defined per table, and only DDL triggers can be fired at the schema level.

Now, we have two choices.









1. To track all DML statements using Audit
2. To create one (INSERT, UPDATE, DELETE) trigger for all tables of the schema automatically

It's better to use auditing for those tables, but since the user wanted it solved with triggers, I came up with the following solution:
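
For completeness, option 1 (auditing) could look roughly like this; a sketch that assumes the AUDIT_TRAIL initialization parameter is set to DB and the instance has been restarted (test_trigger is the demo user created below):

AUDIT INSERT TABLE, UPDATE TABLE, DELETE TABLE BY test_trigger BY ACCESS;

SELECT username, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE username = 'TEST_TRIGGER';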

In order to create a trigger for each object, I followed the steps below:



-- I create a new user

SQL> create user test_trigger identified by kamran;
User created.

-- Give it the DBA role (don't do this in a production database)

SQL> grant dba to test_trigger;
Grant succeeded.

-- Connect with that user

SQL> conn test_trigger/kamran
Connected.

-- Create a new table

SQL> create table tab1 (id number);
Table created.

-- Create a second table

SQL> create table tab2 (id number);
Table created.

-- Create another table to log which operations (INSERT, UPDATE, DELETE) were performed on the two other tables

SQL> create table tab1_2_log (information varchar2(20));
Table created.

-- Write a PL/SQL block that automatically creates a trigger for each table of the newly created user

SQL> DECLARE
  2     CURSOR all_tables
  3     IS
  4        SELECT table_name
  5          FROM user_tables
  6         WHERE table_name <> 'TAB1_2_LOG';
  7  BEGIN
  8     FOR rec_cur IN all_tables
  9     LOOP
 10        EXECUTE IMMEDIATE 'create or replace trigger trg_'
 11           || rec_cur.table_name
 12           || ' before insert or update or delete on '
 13           || rec_cur.table_name
 14           || ' declare
 15                begin
 16                  if UPDATING then
 17                    insert into tab1_2_log values(''UPDATING on '||rec_cur.table_name||''');
 18                  elsif DELETING then
 19                    insert into tab1_2_log values(''DELETING on '||rec_cur.table_name||''');
 20                  elsif INSERTING then
 21                    insert into tab1_2_log values(''INSERTING on '||rec_cur.table_name||''');
 22                  end if;
 23                end;';
 24     END LOOP;
 25  END;
 26  /

PL/SQL procedure successfully completed.

1. Here, I create a cursor that returns all tables in the newly created schema (excluding the log table)
2. Open the cursor and get the name of each table in my schema
3. Build an EXECUTE IMMEDIATE statement and create a trigger on each table by passing in its name

Now I test my triggers by inserting, updating and deleting data in the tables:

SQL> insert into tab1 values(1);
1 row created.

SQL> update tab1 set id=1;
1 row updated.

SQL> delete from tab1;
1 row deleted.

SQL> insert into tab2 values(1);
1 row created.

SQL> update tab2 set id=1;
1 row updated.

SQL> delete from tab2 ;
1 row deleted.

-- Now, let's check what's in our log table
SQL> select * from tab1_2_log;

INFORMATION
--------------------
INSERTING on TAB1
UPDATING on TAB1
DELETING on TAB1
INSERTING on TAB2
UPDATING on TAB2
DELETING on TAB2
6 rows selected.

-- As can be seen, all the information related to the DELETE, UPDATE and INSERT operations on the tables has been gathered. Now, check the names of the triggers that were created in that schema to track the DML statements on each table
SQL> SELECT trigger_name, table_name FROM all_triggers WHERE table_owner='TEST_TRIGGER';

TRIGGER_NAME                   TABLE_NAME
------------------------------ ------------------------------
TRG_TAB1                       TAB1
TRG_TAB2                       TAB2

To get more information related to triggers, please refer to the documentation:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/triggers.htm#i6061


Thanks.
NJ

>>>SCRIPTS :

User access to database within limits of time

Here's a logon trigger and a demonstration of how it works:

C:\Documents and Settings\Administrator>sqlplus "/as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 28 14:56:58 2009

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> create user orcl identified by orcl;

User created.

SQL> grant connect, resource to orcl;

Grant succeeded.

SQL> conn orcl/orcl
Connected.
SQL> disc
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Pr
oduction
With the Partitioning, OLAP and Data Mining options
SQL> conn / as sysdba
Connected.
SQL> CREATE OR REPLACE TRIGGER limit_connection
  2     AFTER LOGON ON DATABASE
  3  BEGIN
  4     IF USER = 'ORCL' THEN
  5        IF to_number(TO_CHAR (SYSDATE, 'hh24')) BETWEEN 8 AND 22
  6        THEN
  7           RAISE_APPLICATION_ERROR(-20998,' Dear user '||USER||'! You can''t login between 08 and 22');
  8        END IF;
  9     END IF;
10  END limit_connection;
11  /

Trigger created.

SQL> select to_char(sysdate,'hh24') from dual;

TO
--
23

SQL> conn orcl/orcl
Connected.
SQL> select to_char(sysdate,'hh24') from dual;

TO
--
18

SQL> conn orcl/orcl
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-20998:  Dear user orcl! You can't login between 08 and 22
ORA-06512: at line 5


Warning: You are no longer connected to ORACLE.
SQL>


Cheers!


 >>>SCRIPTS :

Block developers from using TOAD and other tools on production databases

A nice example at www.psoug.org on how to prevent users from using additional tools to connect to a production database. This is an AFTER LOGON trigger created ON DATABASE, as follows:

CONNECT / AS SYSDBA;

CREATE OR REPLACE TRIGGER block_tools_from_prod
  AFTER LOGON ON DATABASE
DECLARE
  v_prog sys.v_$session.program%TYPE;
BEGIN
  SELECT program INTO v_prog
    FROM sys.v_$session
  WHERE  audsid = USERENV('SESSIONID')
    AND  audsid != 0  -- Don't Check SYS Connections
    AND  ROWNUM = 1;  -- Parallel processes will have the same AUDSID's

  IF UPPER(v_prog) LIKE '%TOAD%' OR UPPER(v_prog) LIKE '%T.O.A.D%' OR -- Toad
     UPPER(v_prog) LIKE '%SQLNAV%' OR     -- SQL Navigator
     UPPER(v_prog) LIKE '%PLSQLDEV%' OR -- PLSQL Developer
     UPPER(v_prog) LIKE '%BUSOBJ%' OR   -- Business Objects
     UPPER(v_prog) LIKE '%EXCEL%'       -- MS-Excel plug-in
  THEN
     RAISE_APPLICATION_ERROR(-20000, 'Development tools are not allowed here.');
  END IF;
END;
/
SHOW ERRORS

http://www.psoug.org/snippet/Block_TOAD_and_other_tools_516.htm


Cheers!



>>>SCRIPTS:

DBMS_UTILITY.GET_PARAMETER_VALUE


There are different ways to get values from the parameter file using PL/SQL. One of them is the DBMS_UTILITY.GET_PARAMETER_VALUE function. In the following example, I print the value of the compatible parameter:


SQL>set serveroutput on;
 declare
    id number;
    str varchar2(40);
 begin
    id:=dbms_utility.get_parameter_value('compatible',id,str);
    dbms_output.put_line(str);
 end;
 /
10.2.0.2.0

PL/SQL procedure successfully completed.
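
The same value can, of course, also be read directly from V$PARAMETER:

SQL> SELECT value FROM v$parameter WHERE name = 'compatible';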


Thanks.
Default listener “LISTENER” is not configured when running DBCA

When running dbca to create a new database you can get the following message:

Default Listener “LISTENER” is not configured in Grid Infrastructure home. Use NetCA to configure Default Listener and rerun DBCA




Actually, there's no need to run netca; all you need to do is create a new listener as follows:

srvctl add listener

srvctl start listener
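
You can then verify that the listener was created and is running, for example with:

srvctl status listener

lsnrctl status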



Thanks.

 
Backup and Recovery Scenarios: User managed and through Rman


These Backup and Recovery Scenarios contain two sets of examples, one using
User Managed backups and the other using RMAN. (The material is a couple of years old but still relevant.)

Within the examples:

1. Complete Closed Database Recovery.
        System tablespace is missing

2. Complete Open Database Recovery.
        Non system tablespace is missing

3. Complete Open Database Recovery
        (when the database is initially closed).
        Non system tablespace is missing

4. Recovery of a Missing Datafile that has no backups.

5. Restore and Recovery of a Datafile to a different location.

6. Control File Recovery

7. Incomplete Recovery.
        Until Time/Sequence/Cancel



Thanks.

 
To monitor used and available space for the whole FRA, use the following two views:



- V$RECOVERY_FILE_DEST - displays information about the disk quota and current disk usage in the flash recovery area. Here you can find the space limit and currently used space in FRA.


- V$FLASH_RECOVERY_AREA_USAGE - displays usage information about flashback recovery areas separated by file types. Here you can view the percentage of FRA that is used from every type of file including flashback log files. This view is available in 10g R2 only.
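
For example, the following queries show the overall quota and usage, and the per-file-type breakdown:

SELECT name,
       space_limit/1024/1024 AS space_limit_mb,
       space_used/1024/1024  AS space_used_mb,
       space_reclaimable/1024/1024 AS reclaimable_mb,
       number_of_files
  FROM v$recovery_file_dest;

SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
  FROM v$flash_recovery_area_usage;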



Thanks.


Monitoring of Flashback Database with Data Dictionary Views

   V$FLASHBACK_DATABASE_LOG, GV$FLASHBACK_DATABASE_LOG (for RAC) - displays information about the flashback data. Use this view to help estimate the amount of flashback space required for the current workload.

    V$FLASHBACK_DATABASE_STAT, GV$FLASHBACK_DATABASE_STAT (for RAC) - displays statistics for monitoring the I/O overhead of logging flashback data. This view also displays the estimated flashback space needed based on previous workloads.

    V$RECOVERY_FILE_DEST - displays information about the disk quota and current disk usage in the flash recovery area. Here you can find the space limit and currently used space in FRA.

    V$FLASHBACK_DATABASE_LOGFILE, GV$FLASHBACK_DATABASE_LOGFILE (for RAC) - display detailed info about flashback log files. Here you can find the start SCN for every flashback log file.

    V$FLASH_RECOVERY_AREA_USAGE - displays usage information about flashback recovery areas separated by file types. Here you can view the percentage of FRA that is used from every type of file including flashback log files. This view is available in 10g R2 only.

    V$ARCHIVED_LOG, V$LOG_HISTORY, GV$ARCHIVED_LOG, GV$LOG_HISTORY - displays archived log information from the control file, including archive log names. You can use it to check the availability of the needed archived redo log files before starting a Flashback Database operation.

    V$LOGFILE, V$LOG, GV$LOGFILE, GV$LOG - displays information about redo log files. You can use it to check the availability of the needed online redo log files before starting a Flashback Database operation.

    V$DATABASE – query the FLASHBACK_ON column to find whether the Flashback Database feature is enabled or not at the database level

    V$TABLESPACE – query the FLASHBACK_ON column to find whether the flashback logging is enabled or not for a specific tablespace

    V$RESTORE_POINT - displays information about restore points. Here you can find the map between a specific restore point, SCN and TIMESTAMP value.

    V$SGAINFO - to find the current size of the redo log buffer, fixed SGA and the granule.

    V$SGASTAT - displays detailed information on the SGA and its structures.
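
A typical starting point is V$FLASHBACK_DATABASE_LOG from the list above, for example:

SELECT oldest_flashback_scn,
       oldest_flashback_time,
       retention_target,
       flashback_size,
       estimated_flashback_size
  FROM v$flashback_database_log;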


Thanks.


Perform Flashback via Enterprise Manager

You can perform a Flashback Database operation via Enterprise Manager as well.
Go to Tab: Maintenance -> Section: Backup/Recovery -> Link: Perform Recovery


Dealing with Excluded Tablespaces in case of Flashback Database Operations

If some of your tablespaces are excluded from flashback logging, it means they are not of critical importance for your system and the database is allowed to be opened without the data contained in them.
If you want to perform a Flashback Database operation with excluded tablespaces, try to follow these steps:
1. Find the desired point in time for the flashback operation. Usually this is the time just before some human error caused logical damage to the database data. You can express it as an SCN or a date (TIMESTAMP expression). Additionally, you can use a restore point or a time just before the last RESETLOGS operation. A log sequence number can be used in 10g R1 but it is not available as an option in 10g R2.
2. Find all datafiles that are with disabled flashback logging:
SQL> SELECT NAME tbs_name,file_name db_file,flashback_on fb_logging, online_status status FROM v$tablespace,dba_data_files WHERE NAME=tablespace_name;


3. Ensure that you have available backups for all datafiles that have disabled flashback logging:
RMAN> LIST BACKUP BY FILE;
If you have a RMAN catalog, you can query the view: RC_BACKUP_DATAFILE_SUMMARY

4. Ensure that all needed archived and redo logs (covering the whole time period between oldest_flashback_time column value and the time of failure) are physically available:
4.1 Check the physical availability:
RMAN> CROSSCHECK ARCHIVELOG FROM SCN oldest_flashback_scn;
Where oldest_flashback_scn is the value of the column oldest_flashback_scn in V$FLASHBACK_DATABASE_LOG.
4.2 Check the logical availability:
RMAN> REPORT UNRECOVERABLE;
REPORT UNRECOVERABLE command reports all datafiles that cannot be recovered from existing backups because redo may be missing.
You can check the logical availability of archived redo logs with the following SQL as well:
SQL> SELECT * FROM v$archived_log WHERE first_change# > = oldest_flashback_scn AND status = 'A';
Where oldest_flashback_scn is the value of the column oldest_flashback_scn in V$FLASHBACK_DATABASE_LOG.

5. Find and write down the current SCN (or create a normal restore point if you are using 10g R2). Don’t forget that the Flashback Database is a change on the database and it must be recorded if some failure occurs within this operation.
SQL> SELECT current_scn FROM v$database;
or
SQL> CREATE RESTORE POINT before_flashback_operation;
To map a time value and SCN, you can use the built-in SQL functions (available since 10g):
SCN_TO_TIMESTAMP – to find associated timestamp with this SCN
TIMESTAMP_TO_SCN – to find associated SCN with this timestamp
SQL> SELECT current_scn, scn_to_timestamp(current_scn) FROM v$database;
This timestamp mapping information is recorded in the SMON_SCN_TIME table.
Oracle keeps the information in SMON_SCN_TIME table for a period of 5 days.

6. Restart the database in MOUNT stage:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT


7. Put the datafiles with disabled flashback logging in OFFLINE mode, using the DROP option.
If you try to use the FLASHBACK DATABASE command immediately, it will return an error. You should first put all datafiles excluded from flashback logging in OFFLINE mode:

Now, if you try to do the flashback, the Flashback Database operation will succeed but another error will be returned:

You must again put the needed datafiles in OFFLINE mode, but with the DROP option. After this command the offlined datafiles cannot be recovered without performing datafile media recovery.

If you want to perform a Flashback Database operation (when some tablespaces are excluded from flashback logging), you should use this command:
SQL> ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\ORCL102\FB_TEST01.DBF' OFFLINE FOR DROP;
This lets the flashback operation complete successfully and the database be opened as fast as possible.

8. Execute the FLASHBACK DATABASE command

9. Open the Database with RESETLOGS clause:
SQL> ALTER DATABASE OPEN RESETLOGS;

10. Recover offline tablespaces.
At this moment, when all important data is already available and accessible to the end users, you can try to perform media recovery of all offlined datafiles while the database is open. You can do this with RMAN:
RMAN> RESTORE TABLESPACE fb_users;
RMAN> RECOVER TABLESPACE fb_users;
Now you can bring recovered datafiles in ONLINE mode:
SQL> ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\ORCL102\FB_TEST01.DBF' ONLINE;
SQL> ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\ORCL102\FB_TEST02.DBF' ONLINE;

It is possible for the SYSTEM tablespace to be excluded from flashback logging:

In this case you will not be able to use the Flashback Database operation to perform fast database point-in-time recovery, because you cannot open the database without performing media recovery of the whole database:

Disabling flashback logging for the SYSTEM tablespace makes the Flashback Database feature useless, because the only way a DBPITR can then be performed is by using standard media recovery: restoring datafiles and applying redo logs.
According to all tests, excluding tablespaces from flashback logging can save some space on your storage, but in many cases it will lead to problems and serious difficulties when using the Flashback Database feature. Exclude a tablespace from flashback logging only if you are sure that you will still be able to perform fast database point-in-time recovery in case of need.


Thanks.

 
Using Flashback Database

Database can be flashed back either through SQL*Plus or RMAN.

  1. In SQL*Plus, you can use this feature by issuing of the SQL command FLASHBACK DATABASE statement. For the command syntax, see: Oracle Database SQL Reference - 10g Release 2 (10.2) - Chapter 18 SQL Statements: DROP SEQUENCE to ROLLBACK - FLASHBACK DATABASE
  2. In RMAN, you can use this feature by issuing of the command FLASHBACK. For the command syntax, see: Oracle Database Backup and Recovery Reference - 10g Release 2 (10.2) - Chapter 2 RMAN Commands - FLASHBACK Command
If there is an event (human error) that requires the database to be flashed back to a point in time, you can do this in a few steps:

1. Ensure that you are not dealing with some kind of media failure. The Flashback Database is useful only when your data is logically corrupted.

2. Find the desired point in time for the flashback operation. Usually it is the time just before some human error caused logical damage to the database data. You can express it as an SCN or a date (TIMESTAMP expression). Additionally, you can use a restore point or a time just before the last RESETLOGS operation. A log sequence number can be used in 10g R1 but it is not available as an option in 10g R2 (via SQL*Plus). It is still available as an option in RMAN.

3. Find the current oldest SCN and time to which the database can be flashed back:
SQL> SELECT oldest_flashback_scn,oldest_flashback_time FROM v$flashback_database_log;
If the desired time is after the oldest time currently recorded in the flashback log files then go to the next step

4. Ensure that there are no tablespaces with disabled flashback logging:
SQL> SELECT NAME tbs_name,file_name db_file,flashback_on fb_logging, online_status status FROM v$tablespace,dba_data_files WHERE NAME=tablespace_name;
If there are files with no important data that are excluded from flashback logging then you can follow steps from the section: Dealing with Excluded Tablespaces in case of Flashback Database Operations
Otherwise, go to the next step.

5. Ensure that all needed archived and redo logs (covering the whole time period between oldest_flashback_time column value and the time of failure) are available:
5.1 Check physical availability:
RMAN> CROSSCHECK ARCHIVELOG FROM SCN oldest_flashback_scn;
Where oldest_flashback_scn is the value of the column oldest_flashback_scn in V$FLASHBACK_DATABASE_LOG.
5.2 Check logical availability:
RMAN> REPORT UNRECOVERABLE;
REPORT UNRECOVERABLE command reports all datafiles that cannot be recovered from existing backups because redo may be missing.
You can check the logical availability of archived redo logs with the following SQL as well:
SQL> SELECT * FROM v$archived_log WHERE first_change# > = oldest_flashback_scn AND status = 'A';
Where oldest_flashback_scn is the value of the column oldest_flashback_scn in V$FLASHBACK_DATABASE_LOG.

6. Find and write down the current SCN (or create a normal restore point if you are using 10g R2). Don’t forget that the Flashback Database is a change on the database and it must be recorded if some failure occurs during this operation.
SQL> SELECT current_scn FROM v$database;
or
SQL> CREATE RESTORE POINT before_flashback_operation;
To map a time value and SCN, you can use the built-in SQL functions (available since 10g):
SCN_TO_TIMESTAMP – to find associated timestamp with this SCN
TIMESTAMP_TO_SCN – to find associated SCN with this timestamp
SQL> SELECT current_scn, scn_to_timestamp(current_scn) FROM v$database;
This timestamp mapping information is recorded in the SMON_SCN_TIME table.
Oracle keeps the information in SMON_SCN_TIME table for a period of 5 days.

7. Restart the database in MOUNT stage
7.1 SQL> SHUTDOWN IMMEDIATE
7.2 SQL> STARTUP MOUNT
For RAC environments, use:
SQL> STARTUP MOUNT EXCLUSIVE
If you want to be able to return the database to its state just as it was closed, you can create a restore point at this moment (restore points can be created even in MOUNT stage):
SQL> CREATE RESTORE POINT just_after_shutdown;
Or to find the last checkpointed SCN:
SQL> SELECT checkpoint_change# FROM v$database;
In the next step, if you perform a few flashback operations and need to return the database to its state just before it was closed, you will need the above SCN.

8. Flashback the Database
For this example the point of time before the logical corruption is:
As SCN: 4125962274
As Timestamp: 24.05.2006 15:10
In SQL it can be done by FLASHBACK DATABASE command. In RMAN it can be done by FLASHBACK command.
Different types of usage:
8.1. Flashes back the database to a point in time just as in the specified SCN:
SQL> FLASHBACK DATABASE TO SCN 4125962274; /* returns database to its state as in the point in time marked with SCN 4125962274 */
8.2. Flashes back the database to a point in time just as in the specified TIMESTAMP (by using of custom TIMESTAMP value):
SQL> FLASHBACK DATABASE TO TIMESTAMP to_timestamp('24.05.2006 15:10','dd.mm.yyyy hh24:mi'); /* returns database to its state as in the point in time 24.05.2006 15:10 */
8.3. Flashes back the database to a point in time just as in the specified TIMESTAMP (by using of SYSTIMESTAMP)
SQL> FLASHBACK DATABASE TO TIMESTAMP (SYSTIMESTAMP -1/24); /* returns database to its state as in the point in time one hour ago */
8.4. Flashes back the database to a point in time just as in the specified TIMESTAMP (by using of SYSTIMESTAMP and INTERVAL):
SQL> FLASHBACK DATABASE TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' MINUTE); /* returns database to its state as in the point in time one minute ago */
8.5. Flashes back the database to a point in time just as in the specified restore point
SQL> FLASHBACK DATABASE TO RESTORE POINT before_flashback_operation; /* returns database to its state as in the point in time marked with the restore point with name before_flashback_operation */
8.6. Flashbacks the database to the point in time just before the last RESETLOGS operation
SQL> FLASHBACK DATABASE TO BEFORE RESETLOGS;
8.7. Flashes back the database to a SCN just before the specified SCN
SQL> FLASHBACK DATABASE TO BEFORE SCN 4125962274;
8.8. Flashes back the database to a point in time one second before the specified TIMESTAMP
SQL> FLASHBACK DATABASE TO BEFORE TIMESTAMP to_timestamp('24.05.2006 15:10','dd.mm.yyyy hh24:mi');
SQL> FLASHBACK DATABASE TO BEFORE TIMESTAMP (SYSDATE -1/24);
SQL> FLASHBACK DATABASE TO BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' MINUTE);
8.9. Flashes back the database to a point in time just before the specified restore point
SQL> FLASHBACK DATABASE TO BEFORE RESTORE POINT before_flashback_operation;
8.10. Flashing back of the database via RMAN
RMAN> FLASHBACK DATABASE TO SCN 4125962274;
RMAN> FLASHBACK DATABASE TO TIMESTAMP (SYSDATE-1/24);
RMAN> FLASHBACK DATABASE TO TIMESTAMP to_timestamp('24.05.2006 15:10','dd.mm.yyyy hh24:mi');
RMAN> FLASHBACK DATABASE TO RESTORE POINT before_flashback_operation;
RMAN> FLASHBACK DATABASE TO BEFORE RESETLOGS;

Let's demonstrate the 'Time Machine' capabilities of this feature:


As can be seen, the Flashback feature allows you to shift through time as if it doesn't exist. In some articles it is called a "rewind" button, but its abilities as a "forward" button make this feature behave like a "Time Machine" for your data.
I want to note here again that you can move forward in time only by applying redo logs. You can move backward in time by applying flashback logs along with a smaller amount of redo data.
Be aware that you will be able to travel through time only if you are well prepared for it. You must have all flashback and archived redo logs for the whole time period. With Flashback Database you can return the database to its state at any point in time within the flashback window in an extremely fast way. The speed of this operation depends only on the number of applied flashback logs, and this mainly depends on the number of data block changes within the database for this time period. It doesn't depend on the database size because no restore operation is performed. Usually this operation can complete within several seconds or minutes and can be between 25 and 105 times faster than the usual incomplete media recovery.
Flashback logs are still populated during every flashback operation that is performed. RVWR doesn't care whether the database is in MOUNT or OPEN stage. Flashback logs are generated because the data blocks within the datafiles are changing and the before-images of data must be recorded as well, even though the database is not open. You can see this behavior in the following example:


9. Open the Database with OPEN RESETLOGS clause:
SQL> ALTER DATABASE OPEN RESETLOGS;



Thanks.


 
How to Configure Flashback Database?

1. Before configuring Flashback Database, you must ensure that:

  • Your database is running in ARCHIVELOG mode
  • FRA (Flash Recovery Area) is configured
  • FRA is configured by two parameters:
  • DB_RECOVERY_FILE_DEST
  • DB_RECOVERY_FILE_DEST_SIZE
For more info about configuring the FRA, see: Oracle Database Backup and Recovery Basics - 10g Release 2 (10.2) - Chapter 3.5 Setting Up a Flash Recovery Area for RMAN
  • COMPATIBILITY initialization parameter is set to 10.0 or higher value

2. Consider the value for flashback database window.
It can be set by an initialization parameter: DB_FLASHBACK_RETENTION_TARGET. The range of SCNs for which there is currently enough flashback log data to support the FLASHBACK DATABASE command is called the Flashback Database Window.
The flashback window depends on the current environment requirements, but usually it is between 24 and 72 hours. Don't consider too big a value for the flashback window, because a flashback operation to a point in time older than several days can be slower and more time-consuming than using the media recovery method.
Flashback Database is efficient and fast for smaller time spans, when the error is discovered immediately (or within a few hours) and performing a Database Point-in-Time Recovery is the only way to recover from this human error.
Default value for DB_FLASHBACK_RETENTION_TARGET is 1440. It is measured in minutes, so by default flashback window is 24 hours.
Keep in mind that flashback logs use additional space within the FRA. For more info about sizing of FRA to include Flashback Logs, see:
Oracle Database Backup and Recovery Basics - 10g Release 2 (10.2) - Chapter 5.3 Setup and Maintenance for Oracle Flashback Database - Sizing the Flash Recovery Area to Include Flashback Logs

3. Connect as user SYS with SYSDBA privileges.

4. Configure DB_FLASHBACK_RETENTION_TARGET:
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440 SCOPE=BOTH;
5. Shut down the database:
SQL> SHUTDOWN IMMEDIATE

6. Start the database in MOUNT stage.
To enable Flashback Database feature, your database must be in MOUNT mode.
SQL> STARTUP MOUNT
For RAC environments, use:
SQL> STARTUP MOUNT EXCLUSIVE

7. Enable Flashback Database Feature:
SQL> ALTER DATABASE FLASHBACK ON;

8. Check whether the Flashback Database feature is enabled or not:
SQL> SELECT flashback_on FROM V$DATABASE;

9. Open the database with enabled Flashback Database feature:
SQL> ALTER DATABASE OPEN;

Configuration with Enterprise Manager:
The Flashback Database Feature can be enabled via Enterprise Manager as well.
Go to: Maintenance Tab -> Backup/Recovery Settings -> Recovery Settings


Excluding Tablespaces from Flashback Logging
By default, flashback logs are generated for all permanent tablespaces. If you want, you can disable flashback logging for specific tablespaces.
For example, if you want to disable flashback generation for a specific tablespace, you can use the following command:
SQL> ALTER TABLESPACE USERS FLASHBACK OFF;

If you want to re-enable flashback logging for this tablespace, you can do it only in MOUNT stage.
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER TABLESPACE USERS FLASHBACK ON;
SQL> ALTER DATABASE OPEN;
During a flashback operation you must take additional actions to handle the missing flashback data for these tablespaces. For more info, see the section: Dealing with Excluded Tablespaces in case of Flashback Database Operations.



Thanks.



 
What are the Benefits of Flashback Database?

According to many studies and reports, Human Error accounts for 30-35% of data loss episodes. This makes Human Errors one of the biggest single causes of downtime.
With Flashback Database feature Oracle is trying to fight against user and operator errors in an extremely fast and effective way.
You can find more info about most common causes of data loss in the following sources:

In most cases, a disastrous logical failure caused by human error can be solved by performing a Database Point-in-Time Recovery (DBPITR). Before 10g the only way to do a DBPITR was incomplete media recovery. Media recovery is a slow and time-consuming process that can take many hours. On the other hand, by using Flashback Database a DBPITR can be done extremely fast: 25 to 105 times faster than the usual incomplete media recovery, and as a result it can minimize the downtime significantly.
For more info about the recovery speed advantages of Flashback Database, you can use the following source:
Oracle Flashback Technology, an Oracle article by Ron Weiss, Director of Product Management in Oracle's Database Kernel Development Group


Flashback Database provides:
  • Very effective way to recover from complex human errors
  • Faster database point-in-time recovery
  • Simplified management and administration
  • Little performance overhead
It provides a lot of benefits and almost no disadvantages.
I will try to discuss all of them in the sections below.
The Flashback Database is not just your database “rewind” button. It is a “Time Machine” for your Database data that is one single command away from you.

The Flashback Database Architecture

Flashback Database uses its own type of log files, called Flashback Database Log Files.
To support this mechanism, Oracle uses new background process called RVWR (Recovery Writer) and a new buffer in the SGA, called Flashback Buffer.
The Oracle database periodically logs before images of data blocks in the flashback buffer. The flashback buffer records images of all changed data blocks in the database. This means that every time a data block in the database is altered, the database writes a before image of this block to the flashback buffer. This before image can be used to reconstruct a datafile to the current point of time.
The maximum allowed memory for the flashback buffer is 16 MB. You don't have direct control over its size. The flashback buffer size depends on the size of the current redo log buffer, which is controlled by Oracle. Starting with 10g R2, the log buffer size cannot be controlled manually by setting the initialization parameter LOG_BUFFER.
In 10g R2, Oracle combines the fixed SGA area and the redo buffer together. If there is free space after Oracle puts the combined buffers into a granule, that space is added to the redo buffer. The sizing of the redo log buffer is fully controlled by Oracle. According to the SGA and its atomic sizing by granules, Oracle automatically calculates the size of the log buffer depending on the current granule size. For a smaller SGA size and 4 MB granules, it is possible for the redo log buffer size plus the fixed SGA size to be a multiple of the granule size. For SGAs bigger than 128 MB, the granule size is 16 MB.
You can see current size of the redo log buffer, fixed SGA and granule by querying the V$SGAINFO view.
You can query the V$SGASTAT view to display detailed information on the SGA and its structures.
To find current size of the flashback buffer, you can use the following query:
SQL> SELECT * FROM v$sgastat WHERE NAME = 'flashback generation buff';
There is no official information from Oracle that confirms the relation between the 'flashback generation buff' structure in the SGA and the real flashback buffer structure. This is only a suggestion.
A similar message is written to the alertSID.log file when the database is opened:
Allocated 3981204 bytes in shared pool for flashback generation buffer
Starting background process RVWR
RVWR started with pid=16, OS id=5392
For more info about the sizing restrictions of log buffer in 10g R2, see Metalink Note: 351857.1: The Log_buffer Cannot be Changed In 10g R2
For more info about the granule model in the SGA (available since Oracle9i), see Metalink Note: 148495.1: Oracle9i New Feature: Dynamic SGA
The flashback buffer size is at least two times bigger than the log buffer size. This assertion can be found in OCP Oracle Database 10g : New Features for Administrators by Sam Alapati
Although the flashback buffer is expected to be at least two times bigger than the log buffer, its real size can be smaller. In the tested 10g R2 environment, the flashback buffer is about half the size of the redo log buffer.
Granule Size: 4194304
Fixed SGA Size: 1289832
Log Buffer: 7098368
Flashback Buffer (flashback generation buff): 3981204
This rule is mentioned indirectly in the Oracle documentation: Oracle Database Backup and Recovery Basics - 10g Release 2 (10.2) - Chapter 5.3 Setup and Maintenance for Oracle Flashback Database - Performance Tuning for Flashback Database:
For large, production databases, set the init.ora parameter LOG_BUFFER to be at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.
The RVWR background process has responsibility to write the contents of flashback buffer to flashback log files. This process is started automatically when flashback database feature is enabled.
RVWR periodically writes the flashback buffer contents to the flashback database logs. It is an asynchronous process and you don't have control over it. All available sources say that RVWR writes periodically to the flashback logs. The explanation for this behavior is that Oracle is trying to reduce the I/O and CPU overhead, which can be an issue in many production environments.
The RVWR writing mechanism and its behavior need more investigation and will not be discussed here. It appears to be controlled by a few hidden parameters, in a similar way to writing to redo log files, depending on the amount of generated flashback data and a time interval.
Flashback log files can be created only under the Flash Recovery Area (which must be configured before enabling the Flashback Database functionality). RVWR creates flashback log files in a directory named "FLASHBACK" under the FRA. The size of every generated flashback log file is again under Oracle's control. In the current Oracle environment, during normal database activity flashback log files have a size of 8200192 bytes, which is very close to the current redo log buffer size. The size of a generated flashback log file can differ during shutdown and startup activities, and during intensive write activity as well.
Flashback log files can be written only under the FRA (Flash Recovery Area). The FRA is closely related to, and built on top of, Oracle Managed Files (OMF). OMF is a service that automates the naming, location, creation and deletion of database files. By using OMF and the FRA, Oracle easily manages flashback log files. They are created with automatically generated names with the extension .FLB. For instance, this is the name of one flashback log file: O1_MF_26ZYS69S_.FLB
By its nature flashback logs are similar to redo log files. LGWR writes contents of the redo log buffer to online redo log files, RVWR writes contents of the flashback buffer to flashback database log files. Redo log files contain all changes that are performed in the database, that data is needed in case of media or instance recovery. Flashback log files contain only changes that are needed in case of flashback operation. The main differences between redo log files and flashback log files are:
  • Flashback log files are never archived - they are reused in a circular manner.
  • Redo log files are used to forward changes in case of recovery while flashback log files are used to backward changes in case of flashback operation
Flashback log files can be compared with UNDO data (contained in UNDO tablespaces) as well. While UNDO data contains changes at the transaction level, flashback log files contain UNDO data at the data block level. While the UNDO tablespace doesn't record all operations performed on the database (for instance, DDL operations), flashback log files record that data as well. In a few words, flashback log files contain the UNDO data for your database. To summarize:
  • UNDO data doesn’t contain all changes that are performed in the database while flashback logs contain all altered blocks in the database
  • UNDO data is used to backward changes at the transaction level while flashback logs are used to backward changes at the database level
You can query the V$FLASHBACK_DATABASE_LOGFILE to find detailed info about your flashback log files. Although this view is not documented it can be very useful to check and monitor generated flashback logs.

There is a new record section within the control file header that is named FLASHBACK LOGFILE RECORDS. It is similar to LOG FILE RECORDS section and contains info about the lowest and highest SCN contained in every particular flashback database log file

***************************************************************************
FLASHBACK LOGFILE RECORDS
***************************************************************************
(size = 84, compat size = 84, section max = 2048, section in-use = 136,
last-recid= 0, old-recno = 0, last-recno = 0)
(extent = 1, blkno = 139, numrecs = 2048)
FLASHBACK LOG FILE #1:
(name #4) E:\ORACLE\FLASH_RECOVERY_AREA\ORCL102\FLASHBACK\O1_MF_26YR1CQ4_.FLB
Thread 1 flashback log links: forward: 2 backward: 26
size: 1000 seq: 1 bsz: 8192 nab: 0x3e9 flg: 0x0 magic: 3 dup: 1
Low scn: 0x0000.f5c5a505 05/20/2006 21:30:04
High scn: 0x0000.f5c5b325 05/20/2006 22:00:38
What does a Flashback Database operation do?

When you perform a flashback operation, Oracle needs all flashback logs from now back to the desired time. They will be applied consecutively, starting from the newest to the oldest.
For instance, if you want to flash back the database to SCN 4123376440, Oracle will read the flashback logfile section in the control file and check the availability of all needed flashback log files. The last needed flashback log is the one whose Low scn and High scn values bracket the desired SCN 4123376440.
In current environment this is the file with name: O1_MF_26YSTQ6S_.FLB and with values of:
Low SCN: 4123374373
High SCN: 4123376446
Note: If you want to perform a flashback operation successfully, you will always need to have at least one archived (or online redo) log file available. This is the particular file that contains redo log information about changes around the desired flashback point in time (SCN 4123376440). In this case, this is the archived redo log with the name ARC00097_0587681349.001, which has values of:
First change#: 4123361850
Next change#: 4123380675
The flashback operation will not succeed without this particular archived redo log.
The reason for this: flashback log files contain before-images of data blocks, each related to some SCN (System Change Number). When you perform a flashback operation to SCN 4123376440, Oracle cannot complete the operation by applying flashback logs alone, because it is applying before-images of data. Oracle needs to restore each data block copy (by applying flashback log files) to its state at the closest possible point in time before SCN 4123376440. This guarantees that the subsequent "redo apply" operation will roll the database forward to SCN 4123376440 and leave it in a consistent state. After applying the flashback logs, Oracle performs a forward operation by applying all needed archived log files (in this case redo information from the file ARC00097_0587681349.001) that roll the database forward to the desired SCN.
Oracle cannot start applying redo log files before making sure that all data blocks have been returned to their state before the desired point in time. So, if the desired restore point in time is 10:00 AM and the oldest restored data block is from 09:47 AM, then you will need all archived log files that contain redo data for the interval between 09:47 AM and 10:00 AM. Without that redo data, the flashback operation cannot succeed.
A short explanation of this issue can be found in the documentation:
Oracle Database Backup and Recovery Basics -
10g Release 2 (10.2) - Chapter 5.1 - Restore Points and Flashback Database: Concepts
When a database is restored to its state at some past target time using Flashback Database, each block changed since that time is restored from the copy of the block in the flashback logs most immediately prior to the desired target time. The redo log is then used to re-apply changes since the time that block was copied to the flashback logs.
Note: Redo logs must be available for the entire time period spanned by the flashback logs, whether on tape or on disk. (In practice, however, redo logs are generally needed much longer than the flashback retention target to support point-in-time recovery.)
There is a lack of information about this specific behavior of the Flashback Database feature. The above quote is the only one place in Oracle documentation where it is mentioned.
There is another source that is trying to explain this behavior in a comprehensible way:
OCP Oracle Database 10g: New Features for Administrators – Chapter 9: Flashback Enhancements, by Sam Alapati

Flashback logs are not independent. They can be used only with the redo data that contains database changes around the desired SCN. This means that if you want to have a working flashback window (and to be able to restore the database to any point in time within this window), you need to ensure the availability of the redo logs as well.
If you are familiar with this information, you will be able to work better with this feature and ensure that it helps you perform faster recovery without unexpected problems.

Availability within Oracle’s Editions

Oracle Flashback Database feature is available only in Oracle Enterprise and Oracle Personal Editions.
For more info, see Metalink Note: 271886.1: Differences Between Different Editions of Oracle Database 10G


Thanks.
 
The Flashback Database Feature


Introduction

The Flashback Database is one of the most powerful new features in Oracle 10g coming as a part of Backup & Recovery enhancements.
In this post you can find the result of my research and shared experience about this feature.

What is the Oracle Flashback Database?

Flashback Database is a part of the backup & recovery enhancements in Oracle 10g Database that are called Flashback Features.
The main purpose of Flashback Technology is to let you quickly recover from logical corruptions or user errors.
Oracle Flashback Features include the following:

    Flashback DATABASE
    Flashback DROP
    Flashback TABLE
    Flashback QUERY
    Flashback VERSIONS QUERY
    Flashback TRANSACTION QUERY


What Oracle's documentation says about the Flashback Database feature:
Oracle Database Backup and Recovery Basics -
10g Release 2 (10.2) - Chapter 5.1 - Restore Points and Flashback Database: Concepts

    Flashback Database enables you to wind your entire database backward in time, reversing the effects of unwanted database changes within a given time window. The effects are similar to database point-in-time recovery.

And:

    It is similar to conventional point in time recovery in its effects, allowing you to return a database to its state at a time in the recent past.

And:

    Flashback Database can be used to reverse most unwanted changes to a database, as long as the datafiles are intact.

From Oracle Database Concepts - 10g Release 2 (10.2) - Chapter 15 - Backup & Recovery

    Oracle Flashback Database lets you quickly recover an Oracle database to a previous time to correct problems caused by logical data corruptions or user errors.

Another quote from Metalink Note 268197.1: New Background Processes In 10g:

    - This is a new feature introduced in 10g
    - Flashbacking a database means going back to a previous database state
    - The Flashback Database feature provides a way to quickly revert an entire Oracle database to the state it was in at a past point in time
    - This is different from traditional point in time recovery
    …
    - One can use Flashback Database to back out changes that:
    - Have resulted in logical data corruptions
    - Are a result of user error
    - This feature is not applicable for recovering the database in case of media failure
   
   
Thanks.
Here is a handy script for generating ALTER SYSTEM KILL SESSION statements in Oracle: just run it and copy/paste the output to kill the sessions. It skips SYS, sessions belonging to users with the DBA role, and sessions from your own machine (replace yourlocalhostname with your client machine name).

select ses.USERNAME,
    substr(MACHINE,1,10) as MACHINE,
    substr(module,1,25) as module,
    status,
    'alter system kill session '''||SID||','||ses.SERIAL#||''';' as kill
from v$session ses LEFT OUTER JOIN v$process p ON (ses.paddr=p.addr)
where schemaname <> 'SYS'
    and not exists
    (select 1
        from DBA_ROLE_PRIVS
        where GRANTED_ROLE='DBA'
            and schemaname=grantee)
    and machine!='yourlocalhostname'
order by LAST_CALL_ET desc;


Thanks.