Monday 16 February 2015

How To Check Tablespace Name and corresponding Size ?

Answer> Execute the following statement.

select file_name, status, tablespace_name, bytes/1024/1024 as size_mb from dba_data_files;
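The query above lists individual data files; if you want one line per tablespace with total and free space, here is a hedged sketch over the standard DBA_DATA_FILES and DBA_FREE_SPACE views (SELECT privilege on both is assumed):

```sql
-- Total vs. free space per tablespace, in MB.
select df.tablespace_name,
       round(sum(df.bytes) / 1024 / 1024)         as total_mb,
       round(nvl(fs.free_bytes, 0) / 1024 / 1024) as free_mb
from   dba_data_files df
       left join (select tablespace_name, sum(bytes) as free_bytes
                  from   dba_free_space
                  group  by tablespace_name) fs
         on fs.tablespace_name = df.tablespace_name
group  by df.tablespace_name, fs.free_bytes
order  by df.tablespace_name;
```

The outer join keeps tablespaces that are completely full (no rows in DBA_FREE_SPACE) in the output.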

Cheers!!!


To get the % usage of all available open_cursors :

Answer >

SELECT 'session_cached_cursors' parameter,
       LPAD(value, 5) value,
       DECODE(value, 0, ' n/a', TO_CHAR(100 * used / value, '990') || '%') usage
FROM   (SELECT MAX(s.value) used
        FROM   v$statname n, v$sesstat s
        WHERE  n.name = 'session cursor cache count'
        AND    s.statistic# = n.statistic#),
       (SELECT value
        FROM   v$parameter
        WHERE  name = 'session_cached_cursors')
UNION ALL
SELECT 'open_cursors',
       LPAD(value, 5),
       TO_CHAR(100 * used / value, '990') || '%'
FROM   (SELECT MAX(SUM(s.value)) used
        FROM   v$statname n, v$sesstat s
        WHERE  n.name IN ('opened cursors current', 'session cursor cache count')
        AND    s.statistic# = n.statistic#
        GROUP  BY s.sid),
       (SELECT value
        FROM   v$parameter
        WHERE  name = 'open_cursors')
/


How To Set The Value For Open_Cursors ?

Answer > Assuming that you are using an spfile to start the database :

------------------
alter system set open_cursors = 4000 scope=both;
------------------

If you are using a pfile instead, you can change the setting for the running instance only :

-------------------
alter system set open_cursors = 4000;
-------------------

You would then also need to edit the parameter file itself to record the new open_cursors setting. It is generally a good idea to restart the database shortly thereafter to make sure the parameter file change works as expected (it is highly annoying to discover months later, the next time you bounce the database, that a parameter file change no one remembers was not done correctly).
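If you are not sure which kind of parameter file the instance was started with, the SPFILE parameter answers it (its value is empty when a plain pfile was used):

```sql
-- Empty VALUE here means the instance was started with a pfile.
select name, value
from   v$parameter
where  name = 'spfile';
```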

I'm also hoping that you are certain you actually need more than 300 open cursors per session. A large fraction of the time, people who raise this setting actually have a cursor leak and are simply papering over the bug rather than addressing the root cause.

Check the updated value for open_cursors :


select max(a.value) as highest_open_cur, p.value as max_open_cur
  from v$sesstat a, v$statname b, v$parameter p
  where a.statistic# = b.statistic#
  and b.name = 'opened cursors current'
  and p.name= 'open_cursors'
  group by p.value;

          OR

SELECT sid, user_name, COUNT(*) "Cursors per session"
FROM   v$open_cursor
GROUP  BY sid, user_name;
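If the counts look high, it helps to see which sessions hold the most cursors; here is a hedged sketch against the standard V$SESSTAT / V$STATNAME / V$SESSION views (SELECT privilege on them is assumed):

```sql
-- Sessions holding the most open cursors, highest first.
-- A session stuck near the OPEN_CURSORS limit often indicates a cursor leak.
select s.sid,
       s.username,
       a.value as open_cur
from   v$sesstat a
       join v$statname b on b.statistic# = a.statistic#
       join v$session  s on s.sid        = a.sid
where  b.name = 'opened cursors current'
and    a.value > 0
order  by a.value desc;
```

Cross-check the top SIDs against V$OPEN_CURSOR to see which statements those sessions left open.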



How To Rename A REDO Log file ?


Answer > You can rename a redo log file with ALTER DATABASE RENAME FILE. The command only updates the control file; you must rename the file at the operating-system level yourself, and the database must be mounted (not open) or the file's group must be inactive and archived. For example (the new name REDO17.LOG is illustrative):

ALTER DATABASE RENAME FILE 'E:\APP\ORADATA\PKT008\REDO017.LOG' TO 'E:\APP\ORADATA\PKT008\REDO17.LOG';

Note : Make sure to change your parameters accordingly.
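The rename statement alone is not enough; here is a hedged sketch of the surrounding steps (the new name REDO17.LOG is illustrative):

```sql
-- 1. shutdown immediate
-- 2. startup mount
-- 3. rename the file at the operating-system level
--    (the statement below updates only the control file, it does NOT move the file)
alter database rename file 'E:\APP\ORADATA\PKT008\REDO017.LOG'
                        to 'E:\APP\ORADATA\PKT008\REDO17.LOG';
-- 4. alter database open;
```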

How To Switch A REDO LogFile?


Answer > Connect to the database with SYSDBA system privilege and execute the following command :

alter system switch logfile;


How to Drop a Redo Log File ?

Answer > Connect to the Oracle database and execute the following SQL statements :

First, check which redo log group is INACTIVE by executing the following statement.

select group#, status, bytes/1024/1024 as size_mb from v$log;


Now, you may drop a log group whose status is INACTIVE.

alter database drop logfile group 1;        -- drops log group 1

After dropping the log group, Oracle does not delete the operating-system file. Remove the file (e.g. REDO01.LOG) from the location where it was created yourself if you want to create a log file with the same name in the future, e.g. :

'E:\APP\ORACLE\ORADATA\PKT009\REDO01.LOG'
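A group that still shows ACTIVE cannot be dropped yet (and the CURRENT group cannot be dropped at all); a common sequence of standard commands to push it to INACTIVE first:

```sql
alter system switch logfile;        -- move the CURRENT group forward
alter system checkpoint;            -- ACTIVE groups become INACTIVE once checkpointed
select group#, status from v$log;   -- re-check the status before dropping
```

Also remember that a database must always keep at least two redo log groups.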

Cheers!!!



How To Check Redo Logs And Their Size and Add Redo Logs to an Oracle Database ?

Answer >  Connect to the Oracle Database and execute the following.

SQL> select group#, status, bytes/1024/1024 from V$log;
                                OR
SQL> select member, status from V$logfile;

The statements above show the existing groups, their members, their status, and their sizes in MB.

Now, add the redo logfiles using the following commands :

alter database add logfile group 1 'E:\APP\ORACLE\ORADATA\PKT009\REDO01.LOG' size 50M;

alter database add logfile group 2 'E:\APP\ORACLE\ORADATA\PKT009\REDO2.LOG' size 50M;

alter database add logfile group 3 'E:\APP\ORACLE\ORADATA\PKT009\REDO3.LOG' size 50M;

alter database add logfile group 4 'E:\APP\ORACLE\ORADATA\PKT009\REDO4.LOG' size 50M;

alter database add logfile group 5 'E:\APP\ORACLE\ORADATA\PKT009\REDO5.LOG' size 50M;

alter database add logfile group 6 'E:\APP\ORACLE\ORADATA\PKT009\REDO6.LOG' size 50M;

alter database add logfile group 7 'E:\APP\ORACLE\ORADATA\PKT009\REDO7.LOG' size 50M;

alter database add logfile group 8 'E:\APP\ORACLE\ORADATA\PKT009\REDO8.LOG' size 50M;

Note : You can modify the path and size accordingly. 
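A hedged variation on the commands above: in production each group normally gets two or more members on separate disks, so the loss of one disk does not cost you the group. The group number and paths here are illustrative:

```sql
-- Add a multiplexed group with two members on different drives.
alter database add logfile group 9
  ('E:\APP\ORACLE\ORADATA\PKT009\REDO9A.LOG',
   'F:\APP\ORACLE\ORADATA\PKT009\REDO9B.LOG') size 50M;
```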

Is privilege "SYSDBA" = "DBA" Role ?

Answer > There is a big difference between them. SYSDBA is a system privilege, not a role; see the documentation :
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/dba.htm#sthref137

And from the documentation we see that :

The DBA role does not include the SYSDBA or SYSOPER system privileges.

Additional differences between a user with the SYSDBA system privilege and a user with the DBA role: a SYSDBA user authenticates via the password file and can edit tables in the SYS schema such as USER$, and a SYSDBA connection shows as SYS and is audited as SYS in mandatory auditing. Of course, most SYSDBA users have also been granted the DBA role. The difference from the actual SYS user is that SYS HAS to log on as SYSDBA, whereas a different DBA user granted the SYSDBA system privilege can choose not to.
http://www.oracleforensics.com/wordpress/index.php/2008/09/21/bypassing-ora-01997/

How To Check Memory_Target ?

Answer > Connect to the database with SYSDBA system privilege and execute the following command :

show parameter memory_target

How To Check SGA_MAX_SIZE ?

Answer > Connect to the database with SYSDBA system privilege and execute the following command :

show parameter sga_max_size
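SHOW PARAMETER is a SQL*Plus command; from any other client you can query V$PARAMETER directly for the same values:

```sql
-- Works in any client, not just SQL*Plus.
select name, value, isdefault
from   v$parameter
where  name in ('memory_target', 'sga_max_size');
```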

How To Kill Multiple Sessions In Oracle 10G/11G ?

Answer > Execute the following in SQL*Plus after connecting to the Oracle instance. The query builds an ALTER SYSTEM KILL SESSION statement for every non-SYS, non-DBA session (adjust the machine filter so your own host is excluded) :

select ses.USERNAME,
    substr(MACHINE,1,10) as MACHINE,
    substr(module,1,25) as module,
    status,
    'alter system kill session '''||SID||','||ses.SERIAL#||''';' as kill
from v$session ses LEFT OUTER JOIN v$process p ON (ses.paddr=p.addr)
where schemaname <> 'SYS'
    and not exists
    (select 1
        from DBA_ROLE_PRIVS
        where GRANTED_ROLE='DBA'
            and schemaname=grantee)
    and machine!='yourlocalhostname'
order by LAST_CALL_ET desc;
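The KILL column of the query above emits ready-to-run statements; the SID/SERIAL# pair below is purely illustrative:

```sql
-- Substitute the SID,SERIAL# pair produced by the query above.
alter system kill session '123,4567';
-- If the session is slow to die, an IMMEDIATE variant exists:
-- alter system kill session '123,4567' immediate;
```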

----------------------------------

For a simpler overview of the sessions, to cross-check before killing anything :

set lines 200 pages 200;

select username, sid, machine from v$session where username is not null;

--------------------------------

Cheers!!!


How To Take An RMAN Backup in Oracle 10G/11G?


STEPS (the example first enables ARCHIVELOG mode; skip that part if the database already runs in ARCHIVELOG mode) :


SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

sqlplus / as sysdba

SQL> startup mount
ORACLE instance started.

Total System Global Area  778387456 bytes
Fixed Size                  1374808 bytes
Variable Size             260048296 bytes
Database Buffers          511705088 bytes
Redo Buffers                5259264 bytes
Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

C:\Documents and Settings\Prod>rman target 'sys/sys@orcl_new as sysdba'

Recovery Manager: Release 11.2.0.1.0 - Production on Fri Jan 30 17:52:07 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL_NEW (DBID=2985633894)

RMAN> backup database plus archivelog;

Starting backup at 30-JAN-15
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=16 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=1 STAMP=870371596
channel ORA_DISK_1: starting piece 1 at 30-JAN-15
channel ORA_DISK_1: finished piece 1 at 30-JAN-15
piece handle=D:\APP\Prod\FLASH_RECOVERY_AREA\ORCL_NEW\BACKUPSET\2015_01_30\O1_MF_ANNNN_TAG20150130T175339_BDPXYN9R_.BKP tag=TAG20150130T175339 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 30-JAN-15

Starting backup at 30-JAN-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=D:\APP\Prod\ORADATA\ORCL_NEW\SYSTEM01.DBF
input datafile file number=00002 name=D:\APP\Prod\ORADATA\ORCL_NEW\SYSAUX01.DBF
input datafile file number=00003 name=D:\APP\Prod\ORADATA\ORCL_NEW\UNDOTBS01.DBF
input datafile file number=00004 name=D:\APP\Prod\ORADATA\ORCL_NEW\USERS01.DBF
channel ORA_DISK_1: starting piece 1 at 30-JAN-15
channel ORA_DISK_1: finished piece 1 at 30-JAN-15
piece handle=D:\APP\Prod\FLASH_RECOVERY_AREA\ORCL_NEW\BACKUPSET\2015_01_30\O1_MF_NNNDF_TAG20150130T175351_BDPXYRG5_.BKP tag=TAG20150130T175351 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:05
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 30-JAN-15
channel ORA_DISK_1: finished piece 1 at 30-JAN-15
piece handle=D:\APP\Prod\FLASH_RECOVERY_AREA\ORCL_NEW\BACKUPSET\2015_01_30\O1_MF_NCSNF_TAG20150130T175351_BDPY0TQX_.BKP tag=TAG20150130T175351 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 30-JAN-15

Starting backup at 30-JAN-15
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4 RECID=2 STAMP=870371700
channel ORA_DISK_1: starting piece 1 at 30-JAN-15
channel ORA_DISK_1: finished piece 1 at 30-JAN-15
piece handle=D:\APP\Prod\FLASH_RECOVERY_AREA\ORCL_NEW\BACKUPSET\2015_01_30\O1_MF_ANNNN_TAG20150130T175500_BDPY0WXS_.BKP tag=TAG20150130T175500 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 30-JAN-15
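After the backup finishes, it is worth verifying what RMAN actually wrote; these are standard RMAN commands:

```
RMAN> list backup summary;        -- one line per backup set
RMAN> report need backup;         -- datafiles still in need of a backup
RMAN> restore database validate;  -- reads the backup pieces without restoring anything
```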

Cheers!!!!!

Sunday 15 February 2015

How to Import the tables from the DUMP into the Oracle Database when tables with same name already exist in the Database ?

Answer > Sometimes we need to import a table into a schema where a table with the same name already exists. A plain import then throws an error about the existing table. If we have to preserve the old data and append the new data, we can use the TABLE_EXISTS_ACTION parameter of Data Pump. The valid keywords are {SKIP | APPEND | TRUNCATE | REPLACE}.

The effects of the possible values are :

1.) SKIP : leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY. SKIP is the default.

2.) APPEND : loads rows from the source and leaves existing rows unchanged.

3.) TRUNCATE : deletes existing rows and then loads rows from the source.

4.) REPLACE : drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.


Here is a  DEMO of the TABLE_EXISTS_ACTION parameter  :

First of all we will export the table (say test) that is in the TEST schema.

C:\>expdp system/xxxx@noida directory=dpump tables=TEST.test dumpfile=TEST_test.dmp logfile=exp_TESTlog.log

Export: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 13:21:28
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/********@noida directory=dpump tables=TEST.test dumpfile=TEST_test.dmp logfile=exp_TESTlog.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "TEST"."TEST"                               5.062 KB       9 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DPUMP\TEST_TEST.DMP
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 13:23:13

Now we consider each of the valid keywords of the table_exists_action parameter.

Case 1 : table_exists_action=skip (the default)

C:\>impdp system/xxxx@noida directory=dpump full=y  dumpfile=TEST_test.dmp logfile=imp_TESTlog.log
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 13:32:22
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump full=y dumpfile=TEST_test.dmp logfile=imp_TESTlog.log
Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39151: Table "TEST"."TEST" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 13:32:35

Hence, the output above shows that the table was skipped.

Case 2 : table_exists_action=append
Now we drop the table test, recreate it, and populate it with new values.

SQL> drop table test;
Table dropped.
SQL> create table test (id number);
Table created.

SQL> insert into test values (&Y);
Enter value for y: 111
old   1: insert into test values (&Y)
new   1: insert into test values (111)
1 row created.

SQL> /
Enter value for y: 222
old   1: insert into test values (&Y)
new   1: insert into test values (222)
1 row created.

SQL> /
Enter value for y: 333
old   1: insert into test values (&Y)
new   1: insert into test values (333)
1 row created.

SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
       111
       222
       333

Now we will import the dump into the TEST schema, which already has the table "test".

SQL>HOST impdp system/xxxx@noida directory=dpump dumpfile=TEST_TEST.dmp table_exists_action=append

Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:22:39
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=TEST_TEST.dmp table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "TEST"."TEST" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 14:22:58

SQL> select * from test;
        ID
----------
       111
       222
       333
        11
        22
        33
        44
        55
        66
        77
        88

        ID
----------
        99
12 rows selected.

Hence we find that the imported rows were appended to the existing table. With APPEND, only the table data is imported; dependent metadata such as indexes is skipped.

Case 3 : table_exists_action=truncate
We already have 12 rows in table "test". Now we again import the dump, which holds 9 rows.

SQL>host impdp system/xxxx@noida directory=dpump dumpfile=TEST_TEST.dmp table_exists_action=truncate

Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:26:35
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=TEST_TEST.dmp table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39153: Table "TEST"."TEST" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 14:26:51

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
9 rows selected.

Case 4 : table_exists_action=replace
Now we add a few rows to table "test" to check the result.

SQL> insert into test values(1234);
1 row created.
SQL> insert into test values(12345);
1 row created.
SQL> insert into test values(34567);
1 row created.
SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
      1234
     12345
     34567
12 rows selected.

SQL>host impdp system/xxxx@noida directory=dpump dumpfile=TEST_TEST.dmp table_exists_action=replace

Import: Release 11.1.0.6.0 - Production on Tuesday, 19 April, 2011 14:33:23
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@noida directory=dpump dumpfile=TEST_TEST.dmp table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."TEST"                               5.062 KB       9 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 14:33:42

SQL> select  *  from test ;
        ID
----------
        11
        22
        33
        44
        55
        66
        77
        88
        99
9 rows selected.

Hence, table_exists_action=replace internally drops and recreates the table, so all the existing metadata is dropped and recreated as well.
Note : REPLACE is not a valid option for a job that carries no metadata (for example, CONTENT=DATA_ONLY); in that case the table cannot be replaced.