Friday 24 July 2015


Active Session History (ASH) Analytics in Oracle Enterprise Manager Cloud Control 12c
The Active Session History (ASH) was introduced in Oracle 10g to provide real-time diagnostics information. ASH Analytics is a feature of Enterprise Manager Cloud Control 12c, which visualises ASH information, making it even simpler to diagnose performance problems.

ASH Analytics is listed as a requirement for the 1Z0-060 "Upgrade to Oracle Database 12c" OCP upgrade exam, hence this short overview for those who don't use Cloud Control.

ASH requires the Diagnostics and Tuning option in Enterprise Edition.
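If you don't have Cloud Control, you can still query the raw ASH data directly. As a minimal sketch, the following summarises activity over the last 30 minutes by wait class, treating "ON CPU" samples as CPU time (columns are from the documented V$ACTIVE_SESSION_HISTORY view):

SQL> select decode(session_state, 'ON CPU', 'CPU', wait_class) as activity,
  2         count(*) as samples
  3  from   v$active_session_history
  4  where  sample_time > sysdate - 30/1440
  5  group by decode(session_state, 'ON CPU', 'CPU', wait_class)
  6  order by samples desc;

Each sample represents one second of one active session, so the counts approximate DB time per wait class.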

Once you've navigated to a database target, the "ASH Analytics" menu will be available under the "Performance" section.

ASH Analytics : Menu




-  On the top-left of the page you will see buttons to quickly alter the monitoring period displayed in the overview graph.

-  The greyed out, selected, area of the activity graph is expanded into a more detailed activity graph below.

-  All the screens under the ASH Analytics section can be saved or emailed by pressing the relevant button.

-  The edges of the selection on the overview graph can be repositioned by dragging them with your mouse. The expanded activity graph will refresh to match your selection.

-  The dropdown allows you to alter the contents of the expanded activity graph.

-  The dropdowns allow you to customise the contents of each detail section.



The dropdowns contain the following dimensions.

Top Dimensions : Wait Class, Wait Event, Instance, Service, Module, Action, User Session, SQL ID

SQL : SQL ID, Top Level SQL ID, SQL Force Matching Signature, SQL Plan Hash Value, SQL Plan Operation, SQL Plan Operation Line, SQL Opcode, Top Level SQL Opcode

PL/SQL : PL/SQL, Top Level PL/SQL

Resource Consumption : Wait Class, Wait Event, Object, Blocking Session

Session Identifiers : Instance, Service, User Session, Parallel Process, User ID, Program, Session Type

Session Attributes : Consumer Group, Module, Action, Client, Transaction ID, Execution Context ID

Hovering over the expanded activity graph highlights the section under the pointer. Clicking it applies a filter and redraws the graph based on that filter.

The filter can be removed by clicking the "X" on the filter.



Clicking the "Load Map" button will redraw the activity as a load map.



Thanks.


























Emergency Monitoring in Oracle Enterprise Manager Cloud Control 12c

The Emergency Monitoring feature of Enterprise Manager Cloud Control 12c is an evolution of Memory Access Mode from Grid Control 11g, allowing you to access in-memory statistics. In EM12c you no longer need to explicitly enable memory access mode. Just navigate to the page and wait while it makes a diagnostic connection to the database and performs lightweight analysis. The screen contains information that will help analyze performance issues, even if the database is hung and you can't connect to the normal performance pages.

Emergency Monitoring is listed as a requirement for the 1Z0-060 "Upgrade to Oracle Database 12c" OCP upgrade exam, hence this short overview for those who don't use Cloud Control.

    Emergency Monitoring
    Command Line

Emergency Monitoring

If you are struggling to connect to the main performance pages for the target database, you can use emergency monitoring. The "Emergency Monitoring" menu option is available under the "Performance" section once you've navigated to the target database.



The resulting screen shows "Host : Runnable Processes", "Average Active Sessions", "I/O" and "Hang Analysis".




Use this information to determine your next course of action.





Command Line

You don't need Cloud Control to perform emergency monitoring. Once connected to the database, you can query views such as V$SESS_TIME_MODEL, V$SESSTAT, V$SQLSTATS, V$ACTIVE_SESSION_HISTORY, DBA_BLOCKERS, DBA_WAITERS etc.
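As a sketch of that approach, blocked sessions and their blockers can be spotted directly from V$SESSION (the BLOCKING_SESSION column is available from 10g onward):

SQL> select sid, serial#, blocking_session, event, seconds_in_wait
  2  from   v$session
  3  where  blocking_session is not null;

Each row returned is a session currently stuck behind the session identified by BLOCKING_SESSION.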

Hang analysis can be performed using oradebug hanganalyze, as described here.

$ sqlplus / as sysdba

SQL> oradebug hanganalyze 3

On versions prior to 11g, you could run hanganalyze from a preliminary connection, which may help if you are trying to connect to a hung database where a normal connection is not possible.

$ sqlplus -prelim / as sysdba

SQL>  oradebug hanganalyze 3

From 11g onward, this method of hanganalyze will still appear to work, but the resulting trace file will include an error message, as documented here.

Tanel Poder discussed a workaround for this here. Use setospid to attach to an existing process and let that process run the hanganalyze for you by including the dump keyword. Thanks to Martin Bach for pointing this out.

$ sqlplus -prelim / as sysdba

SQL> oradebug setospid <os_pid>
SQL> oradebug dump hanganalyze 3


Thanks.
NJ


Thursday 23 July 2015

Interesting thing:

FlashEarth: how the Earth looks at night, with imagery from NASA.

http://www.flashearth.com/


 

Thursday 16 July 2015

A brief presentation on Oracle SQL tuning, for those who love performance troubleshooting in Oracle.

YouTube link:

https://www.youtube.com/watch?v=mKUY2rtd5eo



Thanks,
NJ
A presentation on SQL Tuning :

Please download it from the link below:

https://drive.google.com/file/d/0B6DbhsI8ALh-WVNKSWNRd0lrWWM/view?usp=sharing

Thanks,
NJ

Tuesday 14 July 2015


How Oracle Locking Works

Myth: Oracle has a Central Locking System


A week or so ago I started a new series – 100 Things You Probably didn’t Know about Oracle Database . Part 1 of the series proved quite popular, as I surmise from the emails I received. Here is the Part 2 of the series.

When a transaction updates a row, it puts a lock so that no one can update the same row until it commits. When another transaction issues an update to the same row, it waits until the first one either commits or rolls back. After the first transaction performs a commit or rollback, the update by the second transaction is executed immediately, since the lock placed by the first transaction is now gone. How exactly does this locking mechanism work? Several questions come to mind in this context:


    Is there some kind of logical or physical structure called lock?
    How does the second transaction know when the first transaction has lifted the lock?
    Is there some kind of “pool” of such locks where transactions line up to get one?
    If so, do they line up to return it when they are done with the locking?
    Is there a maximum number of possible locks?
    Is there something called a block level lock? Since Oracle stores the rows in blocks, when all or the majority of rows in the blocks are locked by a single transaction, doesn't it make sense to lock the entire block to conserve the number of locks?
    The previous question brings up another question – does the number of active locks in the database at any point really matter?



If you are interested in learning about all this, please read on.

Lock Manager

Since locks convey information on who has which rows modified but not committed, anyone interested in making an update must check with some sort of system that is available across the entire database. So, it makes perfect sense to have a central locking system in the database, doesn't it? But, when you think about it, a central lock manager can quickly become a single point of contention in a busy system where a lot of updates occur. Also, when a large number of rows are updated in a single transaction, an equally large number of locks will be required as well. The question is: how many? One can guess; but it will be at best a wild one. What if you guessed on the low side and the supply of available locks is depleted? In that case some transactions can't get locks and therefore will have to wait (or, worse, abort). Not a pleasant thought in a system that needs to be scalable. To counter such a travesty you may want to make the available supply of locks really high. What is the downside of that action? Since each lock would potentially consume some memory, and memory is finite, it would not be advisable to create an infinite supply of locks.

Some databases actually have a lock manager with a finite supply of such locks. Each transaction must ask to get a lock from it before beginning and relinquish locks to it at completion. In those technologies, the scalability of the application suffers immensely as a result of the lock manager being the point of contention. In addition, since the supply of locks is limited, the developers need to commit frequently to release the locks for other transactions. When a large number of rows have locks on them, the database replaces the row locks with a block level lock to cover all the rows in the block - a concept known as lock escalation. Oracle does not follow that approach. In Oracle, there is no central lock manager, no finite limit on locks, and no concept called lock escalation. The developers commit only when there is a logical need to do so; not otherwise.

Lock Management in Oracle

So, how is that approach different in case of Oracle? For starters, there is no central lock manager. But the information on locking has to be recorded somewhere. Where then? Well, consider this: when a row is locked, it must be available to the session, which means the session’s server process must have already accessed and placed the block in the buffer cache prior to the transaction occurring. Therefore, what is a better place for putting this information than right there in the block (actually the buffer in the buffer cache) itself?

Oracle does precisely that – it records the information in the block. When a row is locked by a transaction, that nugget of information is placed in the header of the block where the row is located. When another transaction wishes to acquire the lock on the same row, it has to access the block containing the row anyway (as you learned in Part 1 of this series) and upon reaching the block, it can easily confirm that the row is locked from the block header. A transaction looking to update a row in a different block puts that information on the header of that block. There is no need to queue behind some single central resource like a lock manager. Since lock information is spread over multiple blocks instead of a single place, this mechanism makes transactions immensely scalable.

Being the smart reader you are, you are now hopefully excited to learn more or perhaps you are skeptical. You want to know the nuts and bolts of this whole mechanism and, more, you want proof. We will see all that in a moment.

Transaction Address

Before understanding the locks, you should understand clearly what a transaction is and how it is addressed. A transaction starts when an update to data such as insert, update or delete occurs (or the intention to do so, e.g. SELECT FOR UPDATE) and ends when the session issues a commit or rollback. Like everything else, a specific transaction should have a name or an identifier to differentiate it from another one of the same type. Each transaction is given a transaction ID. When a transaction updates a row (it could also insert a new row or delete an existing one; but we will cover that little later in this article), it records two things:


    The new value
    The old value



The old value is recorded in the undo segments while the new value is immediately updated in the buffer where the row is stored. The data buffer containing the row is updated regardless of whether the transaction is committed or not. Yes, let me repeat – the data buffer is updated as soon as the transaction modifies the row (before commit). If you didn’t know that, please see the Part 1 of this series.

Undo information is recorded in a circular fashion. When new undo is created, it is stored in the next available undo “slot”. Each transaction occupies a record in the slot. After all the slots are exhausted and a new transaction arrives, the next processing depends on the state of the transactions. If the oldest transaction occupying any of the other slots is no longer active (that is either committed or rolled back), Oracle will reuse that slot. If none of the transactions is inactive, Oracle will have to expand the undo tablespace to make room. In the former case (where a transaction is no longer active and its information in undo has been erased by a new transaction), if a long running query that started before the transaction occurred selects the value, it will get an ORA-1555 error. But that will be covered in a different article in the future. If the tablespace containing the undo segment can’t extend due to some reason (such as in case of the filesystem being completely full), the transaction will fail.

Speaking of transaction identifiers, a transaction ID is in the form of three numbers separated by periods. These three numbers are:


    Undo Segment Number where the transaction records its undo entry
    Slot# in the undo segment
    Sequence# (or wrap) in the undo slot



This is sort of like the social security number of the transaction. This information is recorded in the block header. Let’s see the proof now through a demo.

Demo

First, create a table:

SQL> create table itltest (col1 number, col2 char(8));


Insert some rows into the table.

SQL> begin
  2     for i in 1..10000 loop
  3             insert into itltest values (i,'x');
  4     end loop;
  5     commit;
  6  end;
  7  /


Remember, this is a single transaction. It started at the “BEGIN” line and ended at “COMMIT”. The 10,000 rows were all inserted as a part of the same transaction. To know the transaction ID of this transaction, Oracle provides a special package - dbms_transaction. Here is how you use it. Remember, you must use it in the same transaction. Let’s see:

SQL> select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID
------------------------------------------------------------------------


1 row selected.


Wait? There is nothing. The transaction ID returned is null. How come?

If you followed the previous section closely, you will realize that the transaction ends when a commit or rollback is issued. The commit was issued inside the PL/SQL block. So, the transaction had ended before you called the dbms_transaction package. Since there was no transaction, the package returned null.

Let’s see another demo. Update one row:

SQL> update itltest set col2 = 'y' where col1 = 1;

1 row updated.


In the same session, check the transaction ID:

SQL> select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID
-------------------------------------------------------------------------
3.23.40484

1 row selected.


There you see – the transaction ID. The three numbers separated by period signify undo segment number, slot# and record# respectively. Now perform a commit:

SQL> commit;

Commit complete.


Check the transaction ID again:

SQL> select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID
-------------------------------------------------------------------------


1 row selected.


The transaction is gone so the ID is null, as expected.

Since the call to the package must be in the same transaction (and therefore in the same session), how can you check the transaction in a different session? In real life you will be asked to check transactions in other sessions, typically application sessions. Let's do a slightly different test. Update the row one more time and check the transaction:

SQL> update itltest set col2 = 'y' where col1 = 1;

1 row updated.

SQL> select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID
-----------------------------------------------------------------------
10.25.31749

1 row selected.


From a different session, check for active transactions. This information is available in the view V$TRANSACTION. There are several columns; but we will look at four of the most important ones:


    ADDR – the address of the transaction, which is a raw value
    XIDUSN – the undo segment number
    XIDSLOT – the slot#
    XIDSQN – the sequence# or record# inside the slot



SQL> select addr, xidusn, xidslot, xidsqn
  2  from v$transaction;

ADDR         XIDUSN    XIDSLOT     XIDSQN
-------- ---------- ---------- ----------
3F063C48         10         25      31749


Voila! You see the transaction id of the active transaction from a different session. Compare the above output to the one you got from the call to dbms_transaction package. You can see that the transaction identifier shows the same set of numbers.

Interested Transaction List

You must be eager to know about the section of the block header that contains information on locking and how it records it. It is a simple data structure called the "Interested Transaction List" (ITL), a list that maintains information on transactions. The ITL contains several placeholders (or slots) for transactions. When a row in the block is locked for the first time, the transaction places a lock in one of the slots. In other words, the transaction makes it known that it is interested in some rows (hence the term "Interested Transaction List"). When a different transaction locks another set of rows in the same block, that information is stored in another slot and so on. When a transaction ends after a commit or a rollback, the locks are released and the slot which was used to mark the row locks in the block is now considered free (although it is not updated immediately - a fact about which you will learn later in a different installment).

[Updated Jan 22, 2011] [Thank you, Randolph Geist (info@www.sqltools-plusplus.org) for pointing it out. I follow his blog http://oracle-randolf.blogspot.com/, which is a treasure trove of information.
The row also stores a bit that represents whether it is locked or not.
[end of Update Jan 22, 2011]


ITLs in Action

Let's see how ITLs really work. Here is an empty block. The block header is the only occupant of the block.



This is how the block looks after a single row has been inserted:






 


Note, the row was inserted from the bottom of the block. Now, a second row has been inserted:

  


A session comes in and updates the row Record1, i.e. it places a lock on the row, shown by the star symbol. The lock information is recorded in the ITL slot in the block header:

  

The session does not commit yet; so the lock is active. Now a second session - Session 2 - comes in and updates row Record2. It puts a lock on the record - as stored in the ITL slot.

  


I have used two different colors to show the locks (as shown by the star symbol) and the color of the ITL entry.

As you can clearly see, when a transaction wants to update a specific row, it doesn't have to go anywhere but the block header itself to know if the row is locked or not. All it has to do is check the ITL slots. However, the ITL alone does not show with 100% accuracy that a row is locked (again, something I will explain in a different installment). The transaction must go to the undo segment to check if the locking transaction has been committed. How does it know which specific part of the undo segment to go to? Well, it has the information in the ITL entry. If the row is indeed locked, the transaction must wait and retry. As soon as the previous transaction ends, the undo information is updated and the waiting transaction completes its operation.

So, there is in fact a queue for the locks, but it's at the block level, not at the level of the entire database or even the segment.
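This block-level queue is also visible in the data dictionary. A waiter requests a TX enqueue whose identifiers are derived from the blocker's transaction ID (ID1 packs the undo segment# and slot#, ID2 is the sequence#), so a quick sketch like this shows holders and waiters side by side:

SQL> select sid, type, id1, id2, lmode, request
  2  from   v$lock
  3  where  type = 'TX';

The holding session shows LMODE=6 (exclusive), while a waiter shows REQUEST=6 against the same ID1/ID2 pair.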

Demo

The proof is in the pudding. Let's see all this through a demo. Now that you know the transaction entry, let's see how it is stored in the block header. To do that, first, we need to know which blocks to look for. So, we should get the block numbers where the table is stored:

SQL> select file_id, relative_fno, extent_id, block_id, blocks
  2  from dba_extents
  3  where segment_name = 'ITLTEST';

   FILE_ID RELATIVE_FNO  EXTENT_ID   BLOCK_ID     BLOCKS
---------- ------------ ---------- ---------- ----------
         7            7          0       3576          8
         7            7          1       3968          8
         7            7          2       3976          8
         7            7          3       3984          8


To check inside the block, we need to "dump" the contents of the block to a tracefile so that we can read it. From a different session, issue a checkpoint so that the buffer data is written to the disk:

SQL> alter system checkpoint;


Now dump the data blocks 3576 through 3583.

SQL> alter system dump datafile 7 block min 3576 block max 3583;

System altered.


This will create a tracefile in the user dump destination directory. In the case of Oracle 11g, the tracefile will be in the diag structure under the diag/rdbms/<db_name>/<instance_name>/trace directory. It will most likely be the last tracefile generated. You can also get the precise name by getting the OS process ID of the session:


SQL> select p.spid
  2  from v$session s, v$process p
  3  where s.sid = (select sid from v$mystat where rownum < 2)
  4  and p.addr = s.paddr
  5  /

SPID
------------------------
9202

1 row selected.

Now look for a file named <instance_name>_ora_9202.trc. Open the file in vi and search for the phrase "Itl". Here is an excerpt from the file:


Itl           Xid                  Uba          Flag  Lck        Scn/Fsc
0x01   0x000a.019.00007c05  0x00c00288.1607.0e  ----    1  fsc 0x0000.00000000
0x02   0x0003.017.00009e24  0x00c00862.190a.0f  C---    0  scn 0x0000.02234e2b


This is where the information on row locking is stored. Remember, the row locking information is known as Interested Transaction List (ITL) and each ITL is stored in a “slot”. Here it shows two slots, which is the default number. Look for the one where the “Lck” column shows a value. It shows “1”, meaning one of the rows in the blocks is locked by a transaction. But, which transaction? To get that answer, note the value under the “Xid” column. It shows the transaction ID - 0x000a.019.00007c05. These numbers are in hexadecimal (as indicated by the 0x at the beginning of the number). Using the scientific calculator in Windows, I converted the values to decimal as 10, 25 and 31749 respectively. Do they sound familiar? Of course they do; they are exactly as reported by both the record in v$transaction and the dbms_transaction.local_transaction_id function call.
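Incidentally, you don't need the Windows calculator; the same conversion can be done in SQL itself with TO_NUMBER and the hexadecimal format mask. For the Xid components shown above:

SQL> select to_number('000a','xxxx') usn,
  2         to_number('019','xxx') slot,
  3         to_number('00007c05','xxxxxxxx') sqn
  4  from   dual;

       USN       SLOT        SQN
---------- ---------- ----------
        10         25      31749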

This is how Oracle determines that a transaction has locked the row and correlates it to the various components in the other areas - mostly the undo segments, to determine if it is still active. Now that you know the undo segments hold the transaction details, you may want to know more about the segment. Remember, the undo segment is just a segment, like any other table, index, etc. It resides in a tablespace, which is on some datafile. To find out the specifics of the segment, we will look into some more columns of the view V$TRANSACTION:


SQL> select addr, xidusn, xidslot, xidsqn, ubafil, ubablk, ubasqn, ubarec,
  2  status, start_time, start_scnb, start_scnw, ses_addr
  3  from v$transaction;

ADDR         XIDUSN    XIDSLOT     XIDSQN     UBAFIL     UBABLK     UBASQN
-------- ---------- ---------- ---------- ---------- ---------- ----------
    UBAREC STATUS           START_TIME           START_SCNB START_SCNW SES_ADDR
---------- ---------------- -------------------- ---------- ---------- --------
3F063C48         10         25      31749          3        648       5639
        14 ACTIVE           12/30/10 20:00:25      35868240          0 40A73784


1 row selected.

The columns with names starting with UBA show the undo block address information. Look at the above output. The UBAFIL shows the file#, which is “3” in this case. Checking for the file_id:

SQL> select * from dba_data_files
  2  where file_id = 3;

FILE_NAME
-------------------------------------------------------------------------
   FILE_ID TABLESPACE_NAME                     BYTES     BLOCKS STATUS
---------- ------------------------------ ---------- ---------- ---------
RELATIVE_FNO AUT   MAXBYTES  MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
------------ --- ---------- ---------- ------------ ---------- -----------
ONLINE_
-------
+DATA/d112d2/datafile/undotbs1.260.722742813
         3 UNDOTBS1                       4037017600     492800 AVAILABLE
           3 YES 3.4360E+10    4194302          640 4035969024      492672
ONLINE


1 row selected.


Note the UBASQN (which is the undo block sequence#) value in the earlier output, which was 5639. Let’s revisit the ITL entries in the dump of block:

Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.019.00007c05  0x00c00288.1607.0e  ----    1  fsc 0x0000.00000000
0x02   0x0003.017.00009e24  0x00c00862.190a.0f  C---    0  scn 0x0000.02234e2b


Look at the entry under the Uba column: 0x00c00288.1607.0e. As indicated by the “0x” at the beginning, these are in hexadecimal. Using a scientific calculator, let’s convert them. 1607 in hex means 5639 in decimal – the UBA Sequence# (UBASQN). The value “e” is 14 in decimal, which corresponds to the UBAREC. Finally the value 288 is 648 in decimal, which is the UBABLK. Now you see how the information is recorded in the block header and is also available to the DBA through the view V$TRANSACTION.

Let's see some more important columns of the view. A typical database will have many sessions; not just one. Each session may have an active transaction, which means you have to link sessions to transactions to generate meaningful information. The transaction information also contains the session link. Note the column SES_ADDR, which is the address of the session that issued the transaction. From that, you can get the session information:



SQL> select sid, username
  2  from v$session
  3  where saddr = '40A73784';

SID USERNAME
--- --------
123 ARUP


There you go – you now have the SID of the session. And now that you know the SID, you can look up any other relevant data on the session from the view V$SESSION.

Takeaways

Here is a summary of what you learned so far:

    A transaction in Oracle starts with a data update (or intention to update) statement. Actually there are some exceptions, which we will cover in a later article.
    It ends when a commit or rollback is issued.
    A transaction is identified by a transaction ID (XID) which is a set of three numbers – undo segment#, undo slot# and undo record# - separated by periods.
    You can view the transaction ID in the session itself by calling dbms_transaction.local_transaction_id function.
    You can also check all the active transactions in the view v$transaction, where the columns XIDUSN, XIDSLOT and XIDSQN denote the undo segment#, undo slot# and undo rec# - the values that make up the transaction ID.
    The transaction information is also stored in the block header. You can check it by dumping the block and looking for the term “Itl”.
    The v$transaction view also contains the session address under SES_ADDR column, which can be used to join with the SADDR column of v$session view to get the session details.
    From the session details, you can find out other attributes of the session, such as the username, the SQL issued, the machine it was issued from, etc.


I hope you found it useful. As always, I will be grateful to know how you liked it. In the next installment we will examine how these ITLs could cause issues and how to identify or resolve them.



Thanks,
NJ 


 

Friday 10 July 2015

Configure E-mail For Oracle Enterprise Manager DB 11gR2

From the Oracle documentation:


The notification system allows you to notify Enterprise Manager administrators of alerts, policy violations, and the status changes of job executions. In addition to notifying administrators, the notification system can perform actions such as executing operating system commands (including scripts) and PL/SQL procedures when an alert is triggered.

Before Enterprise Manager can send e-mail notifications, you must first specify the Outgoing Mail (SMTP) servers to be used by the notification system. Once set, you can then define e-mail notifications for yourself or, if you have Super Administrator privileges, you can also define notifications for other Enterprise Manager administrators.

For more about e-mail notification, read here.


If everything is ready, please follow the screen shots:

Welcome Screen/Login Page




The Main Page. Click on Preferences in the upper right:




Add the E-mail below :-






If you need to schedule any rule, just add it like the below:




















Thanks.
NJ


 

 
Oracle Database Patches Information


A number of questions have been asked here and in the Oracle Community Forum: when, and what, are the next database patches?

In either case you need a valid account for http://support.oracle.com.

I think it's worth posting and sharing how to find the desired patch number and how to know about the next patch release, so I will share the MOS document numbers that will help you understand what I mean.

How to Find Patch ?



    Oracle Recommended Patches -- Oracle Database (Doc ID 756671.1)
    Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets [1454618.1]
    Release Schedule of Current Database Releases [742060.1]
    Introduction to Oracle Recommended Patches [756388.1]
    Oracle Database Patchset Information, Versions 10.2.0 to 12.1.0 (Doc ID 268895.1)
    ALERT: Oracle 11g Release 1 (11.1) Support Status and Alerts (Doc ID 454507.1)


The above MOS documents will guide you and help you to know the next release of database patches; as you can see, I also included some documents covering recommended patches.

REGISTRY$HISTORY contains information about the applied patches (PSU, SPU or CPU) and you can use the Oracle Base script from here.
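As a quick sketch, the patch history can be listed directly (DBA_REGISTRY_HISTORY is the documented counterpart of this view):

SQL> select action_time, action, version, comments
  2  from   registry$history
  3  order  by action_time;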

Or

$ORACLE_HOME/OPatch/opatch lsinventory

Thanks.
NJ
Shutdown Immediate Hangs

Today we faced a strange issue with the shutdown immediate command on a cluster database: the command hangs and takes a long time. Sometimes it works, but most of the time it does not.

The below command was working without any problem:

    SQL > shutdown abort

To solve this issue and trace the error, since the alert log was not indicating anything interesting, we had to dump a systemstate every 5 minutes to know what is going on:


    connect / as sysdba
    alter session set events '10400 trace name context forever, level 1';


After this you can kill the session that is preventing the shutdown.

The hang usually happens for one of two reasons:

    A query running on the database.
    A transaction in progress while you are trying to shut down the database.

For the first reason:

    Use v$session_longops. Read more here.

Run the below query to check for long-running operations:

    select count(*) from v$session_longops where time_remaining>0;


For the second reason, you can use v$transaction. Read from here.
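For example, a quick sketch joining it to v$session shows who still has an open transaction at shutdown time:

SQL> select s.sid, s.username, t.status, t.start_time
  2  from   v$transaction t, v$session s
  3  where  t.ses_addr = s.saddr;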

Useful Document :-
Alert Log: Shutdown Waiting for Active Calls to Complete (Doc ID 1039389.6)

Thanks.
NJ
Configure DBConsole as Active/passive

Sometimes a database is configured as active/passive (cluster), meaning only one node works at a time. If a failover happens to the 2nd node, the customer will expect everything to be the same as it was on node 1, including DB Console.

To do this follow the below steps :-


    The database should be configured on the VIP.
    The listener should be configured on the VIP.
    The database and its configuration should be installed on shared storage.
    The inventory files should be on shared storage as well.
    ORACLE_HOSTNAME should be set to the VIP hostname.


Listener example :-


    LISTENER  =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = VIP-HOST )(PORT = 1521))
    )

Don't forget to set the LOCAL_LISTENER parameter:

    SQL> alter system set local_listener='LISTENER';


Or

    SQL> alter system set local_listener='(ADDRESS = (PROTOCOL = TCP)(HOST = VIP-HOST )(PORT = 1521)) ';

Now

    export ORACLE_HOSTNAME=VIP-HOSTNAME
    emca -config dbcontrol db -repos create


Test the failover on node #2

If for some reason emctl still fails with an error asking for OC4J when you try to start it, export ORACLE_HOSTNAME in .bash_profile as well.


Thanks.
NJ
New Features in ORAchk 12.1.0.2.3

The new ORAchk release 12.1.0.2.3 is now available for download from http://support.oracle.com

Is your database healthy?

ORAchk can be used with any database, depending on your platform (Solaris Intel, Solaris SPARC, HP-UX, Linux). You can check the Oracle Support document:

ORAchk - Health Checks for the Oracle Stack (Doc ID 1268927.2)

which includes the user guide and how to use ORAchk on each platform.

New Features :-

Database performance, install, scalability & ASM
Cross stack checks for Oracle Applications running on Solaris & Oracle Hardware
Enterprise Manager Agents performance and failure to run
Oracle EBS Accounts Payables

Typical usage looks like the below:


    ./orachk            ---> regular health check + HA best practices
    ./orachk -c hacheck ---> HA best practices only

 

Thanks.
NJ

 
Enterprise Manager : Performance Across Enterprise Databases


The link below is very useful for understanding performance under #EM12c. Provided by Oracle, this demonstration shows you how to use AWR Warehouse, a new feature of Oracle Enterprise Manager 12c Release 4.

https://www.youtube.com/watch?v=StydMitHtuI&feature=youtu.be

Thanks.
NJ
Configure RMAN Backup On Windows :

The database is supported on Windows, and some people use Windows in production, so RMAN needs to be configured to ensure the database is backed up in case of a failure. To do this, follow the steps below:


    Create a text file called backup and change the extension to "bat". Inside the file, add the following lines:

    C:
    cd E:\app\ADM-BSS01\product\11.2.0\dbhome_1\BIN
    rman @E:\RMAN\Scripts\backup.rman


    Create 3 folders in E:\ for example:
        backup
        Archivelog
        Controlfile

    Create a second file, name it backup.rman, and add the RMAN script like below:

    connect target /
    run
    {
    ALLOCATE CHANNEL c1 DEVICE TYPE disk;
    ALLOCATE CHANNEL c2 DEVICE TYPE disk;
    ALLOCATE CHANNEL c3 DEVICE TYPE disk;
    ALLOCATE CHANNEL c4 DEVICE TYPE disk;
    backup AS COMPRESSED BACKUPSET tag Full_database_open format 'E:\RMAN\backup\db_%t_%s_p%p.bkp' database;
    backup AS COMPRESSED BACKUPSET archivelog all format 'E:\RMAN\backup\archivelog\al_%t_%s_p%p.arch' delete all input;
    backup AS COMPRESSED BACKUPSET current controlfile tag =cf1 format 'E:\RMAN\backup\cf_%t_%s_p%p.bkp';
    backup tag ORCL_CONTROL current controlfile format 'E:\RMAN\Controlfile\%d_%T_%s_%p_CONTROL';
    delete noprompt expired backup;
    release channel c1;
    release channel c2;
    release channel c3;
    release channel c4;
    }
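As an aside, the %d/%T/%s/%p placeholders in the control file format above are expanded by RMAN itself; this Python sketch only illustrates the substitution, using made-up example values:

```python
from datetime import datetime

def expand_rman_format(fmt, db_name="ORCL", set_no=12, piece_no=1,
                       when=datetime(2015, 7, 24)):
    """Illustrative expansion of a few RMAN format variables:
    %d database name, %T date (YYYYMMDD), %s backup set number,
    %p piece number. RMAN does this substitution itself."""
    return (fmt.replace("%d", db_name)
               .replace("%T", when.strftime("%Y%m%d"))
               .replace("%s", str(set_no))
               .replace("%p", str(piece_no)))

print(expand_rman_format(r"E:\RMAN\Controlfile\%d_%T_%s_%p_CONTROL"))
```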


  - Follow the screens now:


    - Add a task name and description.

    - Add the time for this task in this tab.


    Another screen:


    In this screen, add the bat file, in our case backup.bat.






Thanks.
NJ
LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_1


LOG_ARCHIVE_DEST
is applicable only if we are running the database in ARCHIVELOG mode. The LOG_ARCHIVE_DEST parameter is used to specify the archiving location, and the location it specifies must be local. With it we can archive to at most two local locations, a primary and a secondary destination (using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST).



LOG_ARCHIVE_DEST_n
This initialization parameter defines up to ten destinations (n = 1, 2, ..., 10) in Oracle 10g and up to thirty-one (n = 1, 2, ..., 31) in Oracle 11g. Each destination must specify either the LOCATION or the SERVICE attribute to say where to archive the redo data; all other attributes are optional.

We set the attributes of the LOG_ARCHIVE_DEST_n initialization parameter to control different aspects of how redo transport services transfer redo data from a production or primary database destination to another (standby) database destination.

For every LOG_ARCHIVE_DEST_n initialization parameter that we define, we must specify a corresponding LOG_ARCHIVE_DEST_STATE_n parameter, which specifies whether that destination is currently enabled or disabled.


LOG_ARCHIVE_FORMAT

Syntax : LOG_ARCHIVE_FORMAT = 'log%t_%s_%r.arc'


This parameter is applicable only if you are using the redo log in ARCHIVELOG mode. Use a text string and variables to specify the default filename format when archiving redo log files. The string generated from this format is appended to the string specified in the LOG_ARCHIVE_DEST parameter.

The following variables can be used in the format:

%s  log sequence number
%S  log sequence number, zero filled
%t  thread number
%T  thread number, zero filled
%a  activation ID
%d  database ID
%r  resetlogs ID, which ensures unique names are constructed for the archived log files across multiple incarnations of the database
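To make the substitution concrete, here is an illustrative Python sketch; Oracle performs this expansion itself, and the thread, sequence, and resetlogs values below are invented examples:

```python
def expand_archive_format(fmt, thread=1, sequence=532, resetlogs_id=804325966):
    """Illustrative expansion of the LOG_ARCHIVE_FORMAT variables
    %t (thread), %s (sequence), %r (resetlogs ID)."""
    return (fmt.replace("%t", str(thread))
               .replace("%s", str(sequence))
               .replace("%r", str(resetlogs_id)))

print(expand_archive_format("log%t_%s_%r.arc"))  # log1_532_804325966.arc
```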

Reference documents:
1- Oracle LOG_ARCHIVE_FORMAT documentation.
2- Oracle LOG_ARCHIVE_DEST documentation.

Thanks.
NJ

What is the difference between V$SESSION_WAIT and V$SESSION_EVENT?


Both of these are views in the database. We will talk about the difference between them, and why some records appear in V$SESSION_WAIT but not in V$SESSION_EVENT. A simple topic, but useful and good to know. Also check the Oracle documents linked at the end, which give you the structure of both views:

V$SESSION_WAIT
displays the current or last wait for each session.

V$SESSION_EVENT
lists information on waits for an event by a session

This simulation shows what the two views have in common:

    Conn test/test
    
    select SID, EVENT from v$session_wait where event='DIAG idle wait';
    
    SID          EVENT
    5   DIAG idle wait
    8   DIAG idle wait
    
    select SID,EVENT from v$session_event where event ='DIAG idle wait' ;
    
    SID          EVENT
    5   DIAG idle wait
    8   DIAG idle wait

Also, it's good to know that when you are using 10g or later you can use V$SESSION, which gives you real-time information about what is happening right now.


There are different types of enqueue waits in Oracle, like the following:


    enq: TX - allocate ITL entry

    enq: TX - contention

    enq: TX - index contention

    enq: TX - row lock contention




To check them you can query the V$EVENT_NAME view, which provides a complete list of all the enq: wait events.


But in V$SESSION_WAIT you can check the following:

    P1: Lock TYPE (or name) and MODE

    P2: Resource identifier ID1 for the lock

    P3: Resource identifier ID2 for the lock

which are not found in V$SESSION_EVENT.

So we can say:

V$SESSION_WAIT
displays the events for which sessions have just completed waiting or are currently waiting.
V$SESSION_EVENT
is similar to V$SYSTEM_EVENT, but displays all waits for each session.

 Reference Document :
1-V$session_wait
2-V$session_event
3-Events


Thanks.
NJ
How to Check Session Memory for an Oracle User

To check the memory usage for each session:

    select username,name,value
    from v$session join v$sesstat using (sid)
    join v$statname using (statistic#)
    where name = 'session pga memory' and username='';


Example :

USERNAME        NAME                    VALUE
--------------- ----------------------- ----------
SCDB            session pga memory      699884
SCDB            session pga memory      651376
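To get a total for the user rather than one row per session, you could sum the values; this Python sketch works on hypothetical rows shaped like the query output above:

```python
# Hypothetical rows as returned by the query above: (username, name, value)
rows = [
    ("SCDB", "session pga memory", 699884),
    ("SCDB", "session pga memory", 651376),
]

def total_pga_bytes(rows, username):
    """Sum 'session pga memory' (in bytes) across all sessions of one user."""
    return sum(value for user, name, value in rows
               if user == username and name == "session pga memory")

print(total_pga_bytes(rows, "SCDB"))  # total bytes across both sessions
```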

Thanks.
NJ
Multiple DataFiles and Multiple Tablespaces

As a database administrator you manage your database, check the size of your tablespaces and datafiles, and try to get better performance. But why do we use multiple tablespaces and datafiles?

Advantages of multiple tablespaces:

Control disk space allocation for database data
Assign specific space quotas for database users
Control availability of data by taking individual tablespaces online or offline
Perform partial database backup or recovery operations
Allocate data storage across devices to improve performance

The same applies to multiple datafiles:
put each datafile on a separate disk array. This decreases contention between disks.

Thanks.
NJ
ORA-01102: cannot mount database in EXCLUSIVE mode

This error happened when trying to open the database:

    SQL> startup
    ORACLE instance started.

    Total System Global Area 267227136 bytes
    Fixed Size 2212496 bytes
    Variable Size 205524336 bytes
    Database Buffers 54525952 bytes
    Redo Buffers 4964352 bytes
    ORA-01102: cannot mount database in EXCLUSIVE mode



 In alert.log

    sculkget: lock held by PID: 12359
    ORA-09968: unable to lock file
    Linux-x86_64 Error: 11: Resource temporarily unavailable
    Additional information: 12359
    ORA-1102 signalled during: ALTER DATABASE MOUNT...


On OS level :

    [oracle@localhost trace]$ ps -ef | grep pmon

    oracle 10222 1 0 10:27 ? 00:00:00 ora_pmon_ORCL
    oracle 2522 1 0 Nov09 ? 00:00:06 ora_pmon_orcl

Use:

    kill -9 10222 2522

Then:
1- Run export ORACLE_SID= (watch the case sensitivity).
2- sqlplus / as sysdba
3- startup
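The ps check above can be automated; this Python sketch parses hypothetical ps -ef output and extracts the instance name behind each pmon process (the ORCL/orcl case difference is exactly what triggers the ORA-01102 shown earlier):

```python
# Hypothetical lines as printed by `ps -ef | grep pmon` above.
ps_lines = [
    "oracle 10222 1 0 10:27 ? 00:00:00 ora_pmon_ORCL",
    "oracle  2522 1 0 Nov09 ? 00:00:06 ora_pmon_orcl",
]

def pmon_instances(lines):
    """Extract (pid, instance name) for every pmon process found.
    Two entries differing only by case point at the same database
    started under two ORACLE_SID spellings."""
    result = []
    for line in lines:
        fields = line.split()
        pid, cmd = fields[1], fields[-1]
        if cmd.startswith("ora_pmon_"):
            result.append((pid, cmd[len("ora_pmon_"):]))
    return result

print(pmon_instances(ps_lines))  # [('10222', 'ORCL'), ('2522', 'orcl')]
```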

Thanks.
NJ


Register Listener In Database :


The listener listens for new connections trying to connect to the DB server. If the listener goes down, new users will not be able to connect to the DB server, but already connected users can continue their work normally.

The listener waits for requests from clients to connect to the instance. By default, the listener name is (amazingly enough) "LISTENER", but you can call it anything you like. It listens for connection requests on a particular port (the default port number in 8.0 and above is 1521, but once again you can set this to any valid port number if you wish). A client knows where to contact the listener (the machine it's running on, and the port it's listening on) because a local configuration file, called "tnsnames.ora", gives it the necessary information. More advanced configurations can dispense with tnsnames.ora.



How to Register the Listener in the Database

1.) Static instance registration
2.) Dynamic instance registration



Let's discuss these methods, starting with static instance registration:

It is the basic method and uses $ORACLE_HOME/network/admin/listener.ora, which looks like:

        LISTENER =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = xx.xx.xx.xx)(PORT = 1521))
          )
         

And when you run lsnrctl status, the instance name appears with status UNKNOWN, like the following:

    Service "orcl" has 1 instance(s).
      Instance "orcl", status UNKNOWN, has 1 handler(s) for this service...
    Service "orcl" has 1 instance(s).
      Instance "orcl", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully


The status is UNKNOWN because there is no mechanism to guarantee that the specified instance even exists. Here the listener assumes that the instance will be there whenever there is a request; it does not have information about the status of the current instance.

Dynamic Instance Registration (service registration):

In this method, PMON is responsible for registering the instance with the listener.

The benefits of this method are the following:


1.) Simplified configuration  :  Service registration reduces the need for the SID_LIST_listener_name parameter setting, which specifies information about the databases served by the listener, in the listener.ora file.

Note :  The SID_LIST_listener_name parameter is still required if we are using Oracle Enterprise Manager to manage the database.

2.) Connect-time fail over  : Because the listener always knows the state of the instances, service registration facilitates automatic fail over of the client connect request to a different instance if one instance is down.
In a static configuration model, a listener would start a dedicated server upon receiving a client request. The server would later find out that the instance is not up, causing an "Oracle not available" error message.

3.) Connection load balancing: Service registration enables the listener to forward client connect requests to the least loaded instance and dispatcher or dedicated server. Service registration balances the load across the service handlers and nodes. When service registration is working properly, lsnrctl status shows the instances with status READY:

    Services Summary...
    Service "ORCL" has 1 instance(s).
      Instance "ORCL", status READY, has 1 handler(s) for this service...
    Service "ORCLXDB" has 1 instance(s).
      Instance "ORCL", status READY, has 1 handler(s) for this service...
    Service "ORCL2" has 1 instance(s).
      Instance "ORCL2", status READY, has 1 handler(s) for this service...
    The command completed successfully

To register the database with the listener in this way you can use:

    SQL> ALTER SYSTEM REGISTER;


Thanks.
NJ
SGA_MAX_SIZE & SGA_TARGET / MEMORY_TARGET & MEMORY_MAX_TARGET
 


In this article we will discuss the difference between these parameters and how to use them. The MEMORY_TARGET and MEMORY_MAX_TARGET parameters appeared in 11g.


I will rely on the Oracle documentation to describe these parameters:

SGA_MAX_SIZE sets the overall amount of memory the SGA can consume but is not dynamic.

The SGA_MAX_SIZE parameter is the max allowable size to resize the SGA Memory area parameters. If the SGA_TARGET is set to some value then the Automatic Shared Memory Management (ASMM) is enabled, the SGA_TARGET value can be adjusted up to the SGA_MAX_SIZE parameter, not more than SGA_MAX_SIZE parameter value.

MEMORY_TARGET & MEMORY_MAX_TARGET

With these you can manage the SGA and PGA together rather than managing them separately.

If you set SGA_TARGET, SGA_MAX_SIZE and PGA_AGGREGATE_TARGET to 0 and set MEMORY_TARGET (and optionally MEMORY_MAX_TARGET) to a non-zero value, Oracle will manage both the SGA components and the PGA together within the limit you specify.

If MEMORY_TARGET is set to 1024MB, Oracle will manage SGA and PGA components within itself.

If MEMORY_TARGET is set to a non-zero value:

    If SGA_TARGET, SGA_MAX_SIZE and PGA_AGGREGATE_TARGET are set to 0, 60% of the memory specified in MEMORY_TARGET is allocated to the SGA and the remaining 40% is kept for the PGA.
    If SGA_TARGET and PGA_AGGREGATE_TARGET are set to non-zero values, these values will be considered minimum values.
    If SGA_TARGET is set to a non-zero value and PGA_AGGREGATE_TARGET is not set, both values will still be auto-tuned and PGA_AGGREGATE_TARGET will be initialized with the value of (MEMORY_TARGET - SGA_TARGET).
    If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, both parameters will still be auto-tuned. SGA_TARGET will be initialized to a value of (MEMORY_TARGET - PGA_AGGREGATE_TARGET).
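The rules above can be summarised in a small sketch; the 60/40 split and the remainder arithmetic follow the list above, but the function is an illustration, not Oracle's actual algorithm:

```python
def memory_split(memory_target_mb, sga_target_mb=0, pga_target_mb=0):
    """Sketch of the MEMORY_TARGET rules above, returning (sga, pga) in MB.
    With both targets unset (0): 60% goes to the SGA, 40% to the PGA.
    With one set, the other is initialised with the remainder.
    With both set, they are treated as minimum values."""
    if sga_target_mb == 0 and pga_target_mb == 0:
        sga = int(memory_target_mb * 0.60)
        return sga, memory_target_mb - sga
    if pga_target_mb == 0:
        return sga_target_mb, memory_target_mb - sga_target_mb
    if sga_target_mb == 0:
        return memory_target_mb - pga_target_mb, pga_target_mb
    return sga_target_mb, pga_target_mb  # both set: minimums

print(memory_split(1024))  # 60/40 split of MEMORY_TARGET=1024MB
```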



You can find more details on http://support.oracle.com


Thanks.
NJ
Dealing With Oracle Traces :


This article using Oracle Database 11g.

How can I change the name of an Oracle trace file?

alter session set tracefile_identifier = 'some_id';

    SQL> alter session set tracefile_identifier = 'test';

    Session altered.

    SQL> oradebug tracefile_name

    /u01/app/oracle/admin/orcl/udump/orcl_ora_6207_test.trc
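The resulting file name follows the pattern <instance>_ora_<pid>[_<identifier>].trc; here is a small illustrative sketch (the helper function is hypothetical, not an Oracle API):

```python
def trace_file_name(instance, pid, identifier=None):
    """Compose a trace file name the way the example above shows:
    <instance>_ora_<pid>.trc, with the TRACEFILE_IDENTIFIER value
    appended before the extension when it is set."""
    base = f"{instance}_ora_{pid}"
    if identifier:
        base += f"_{identifier}"
    return base + ".trc"

print(trace_file_name("orcl", 6207, "test"))  # orcl_ora_6207_test.trc
```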

Maximum size of an Oracle trace file:

- Set the MAX_DUMP_FILE_SIZE parameter, for example:
- alter session set max_dump_file_size = unlimited;

Finding Oracle Trace File for current session :

     SELECT value
    FROM v$diag_info
    WHERE name = 'Default Trace File';

Finding Oracle Trace File for Current Database Process

     SELECT pid, program, tracefile
    FROM v$process;

To find all trace files for the current instance:

     SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';

and you could use ADRCI features.

Thanks.
NJ
How To Change Listener Default Name?



Step One :

ps -ef | grep tns

    oracle    4214     1  0 22:51 ?        00:00:00 /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr LISTENER -inherit



Step two :

lsnrctl stop LISTENER

    [oracle@localhost ~]$ lsnrctl stop

    LSNRCTL for Linux: Version 10.2.0.5.0 - Production on 17-NOV-2012 22:55:41

    Copyright (c) 1991, 2010, Oracle.  All rights reserved.

    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    The command completed successfully



Step Three :

Go to $ORACLE_HOME/network/admin and modify Listener.ora

[oracle@localhost ~]$ cd /u01/app/oracle/product/10.2.0/db_1/network/admin/





SID_LIST_LISTENER1=
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
  )

LISTENER1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    )
  )


Step Four :

lsnrctl start LISTENER1


Step Five :

SQL> show parameter local_listener

SQL> alter system set local_listener='(address=(protocol=tcp)(host=xx.xx.xx.xx)(port=1521))';
SQL> alter system register;


Thanks.
NJ
WARNING: Subscription for node down event still pending


This is something interesting that I found on one of the searches.

The warning appears like the following:


[oracle@sun3 ~]$ cat /u01/app/oracle/product/10.2.0/network/log/listener1.log

19-NOV-2012 14:00:19 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl1)(CID=(PROGRAM=sqlplus@sun3)(HOST=sun3)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.10.10.4)(PORT=53680)) * establish * orcl1 * 12514
TNS-12514: TNS:listener does not currently know of service requested in connect descriptor
19-NOV-2012 14:01:02 * ping * 0
19-NOV-2012 14:08:34 * service_update * orcl1 * 0
19-NOV-2012 14:18:37 * service_update * orcl1 * 0
WARNING: Subscription for node down event still pending
19-NOV-2012 14:23:32 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=sun3)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=LISTENER)(VERSION=169869568)) * status * 0
19-NOV-2012 14:23:59 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl1)(CID=(PROGRAM=sqlplus@sun3)(HOST=sun3)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.10.10.4)(PORT=53505)) * establish * orcl1 * 12514
TNS-12514: TNS:listener does not currently know of service requested in connect descriptor
19-NOV-2012 14:28:40 * service_update * orcl1 * 0
19-NOV-2012 14:30:38 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl1)(CID=(PROGRAM=sqlplus@sun3)(HOST=sun3)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.10.10.4)(PORT=5290)) * establish * orcl1 * 12514
TNS-12514: TNS:listener does not currently know of service requested in connect descriptor
19-NOV-2012 14:38:43 * service_update * orcl1 * 0
WARNING: Subscription for node down event still pending



Solution 



In listener.ora add the following (the listener name is appended to the parameter, listener1 in our case):

SUBSCRIBE_FOR_NODE_DOWN_EVENT_listener1=OFF


Thanks.
NJ
Convert BLOB to CLOB? How?


We took this procedure from an OTN forum to convert a BLOB to a CLOB:


    CREATE OR REPLACE FUNCTION blob_to_clob (blob_in IN BLOB)
    RETURN CLOB
    AS
    v_clob CLOB;
    v_varchar VARCHAR2(32767);
    v_start PLS_INTEGER := 1;
    v_buffer PLS_INTEGER := 32767;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(v_clob, TRUE);

    FOR i IN 1..CEIL(DBMS_LOB.GETLENGTH(blob_in) / v_buffer)
    LOOP

    v_varchar := UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(blob_in, v_buffer, v_start));

    DBMS_LOB.WRITEAPPEND(v_clob, LENGTH(v_varchar), v_varchar);
    v_start := v_start + v_buffer;
    END LOOP;
    RETURN v_clob;
    END blob_to_clob;
    /

Example:

    select blob_to_clob(blob_column) from table_name;
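The chunking logic of the PL/SQL function has a direct Python analogue; this sketch decodes a byte string in fixed-size pieces and, like the original, assumes a single-byte character set:

```python
def bytes_to_text(blob, chunk_size=32767, encoding="ascii"):
    """Python analogue of the PL/SQL loop above: read the BLOB in
    chunk_size pieces, decode each piece, and append it to the result,
    just as DBMS_LOB.WRITEAPPEND builds up the temporary CLOB."""
    pieces = []
    for start in range(0, len(blob), chunk_size):
        pieces.append(blob[start:start + chunk_size].decode(encoding))
    return "".join(pieces)

print(bytes_to_text(b"hello blob", chunk_size=4))  # hello blob
```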


Thanks.
NJ
What Do the Oracle Version Digits Mean?

Have you ever thought about this?

Major Database Release Number

The first digit is the most general identifier. It represents a major new version of the software that contains significant new functionality.
Database Maintenance Release Number

The second digit represents a maintenance release level. Some new features may also be included.
Application Server Release Number

The third digit reflects the release level of the Oracle Application Server (OracleAS).
Component-Specific Release Number

The fourth digit identifies a release level specific to a component. Different components can have different numbers in this position depending upon, for example, component patch sets or interim releases.
Platform-Specific Release Number

The fifth digit identifies a platform-specific release. Usually this is a patch set. When different platforms require the equivalent patch set, this digit will be the same across the affected platforms.
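Putting the five digits together, here is a small sketch that splits a version string into the components described above (the component names are shorthand for this illustration):

```python
def parse_oracle_version(version):
    """Split a version string such as '11.2.0.4.0' into the five
    release-number components described above."""
    names = ["major", "maintenance", "appserver", "component", "platform"]
    return dict(zip(names, (int(d) for d in version.split("."))))

print(parse_oracle_version("11.2.0.4.0"))
```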


Thanks.
NJ