Oracle database internals by Riyaj

Discussions about Oracle performance tuning, RAC, Oracle internals & E-Business Suite.

Posts Tagged ‘oracle performance’

Is plan_hash_value a final say?

Posted by Riyaj Shamsudeen on September 13, 2009

I was reviewing a performance issue with a client recently. The problem was that increased global cache waits were causing an application slowdown affecting a few critical business functions. Using one of my scripts, gc_traffic.sql, and graphing the results in an Excel spreadsheet, we established that there was a marked increase in GC traffic compared to a week earlier. Similar jobs run every day, so comparing two weekdays is sufficient to show the increase in GC traffic. The graph plots total blocks against AWR snap time at 30-minute intervals.

Identifying the object creating this increased GC traffic is essential to identifying the root cause. Using ADDM and AWR reports, we quickly determined that the increase in GC traffic was localized around a few SQL statements. We decided to focus on one SQL with an obvious increase in elapsed time compared to the prior week. So, the first question asked: is there a change in the plan? The plan_hash_value was reviewed, and we quickly determined that there was no change in the plan_hash_value.
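For reference, here is a minimal sketch of that kind of check against AWR; dba_hist_sqlstat and dba_hist_snapshot are the standard AWR history views, and :sql_id stands in for the statement under review:

-- Plan history and elapsed time per AWR snapshot for one SQL_ID.
select sn.begin_interval_time, st.sql_id, st.plan_hash_value,
       st.executions_delta executions,
       round(st.elapsed_time_delta/1e6, 1) elapsed_sec
  from dba_hist_sqlstat st, dba_hist_snapshot sn
 where st.snap_id         = sn.snap_id
   and st.dbid            = sn.dbid
   and st.instance_number = sn.instance_number
   and st.sql_id          = :sql_id
 order by sn.begin_interval_time;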

Keep Reading

Posted in CBO, Performance tuning, RAC, Uncategorized | 7 Comments »

Shared pool freelists (and durations)

Posted by Riyaj Shamsudeen on August 9, 2009

My earlier blog about shared pool durations got an offline response from one of my readers:
” So, you say that durations aka mini-heaps have been introduced from 10g onwards. I have been using Steve Adams’ script shared_pool_free_lists.sql. Is that not accurate anymore?”


Shared pool free lists

I have great respect for Steve Adams. In many ways, he has been a great virtual mentor, and his insights are remarkable.

Coming back to the question: I have used Steve's script before, and it is applicable to versions prior to Oracle 9i. In 9i, sub-heaps were introduced, and shared pool durations were introduced in Oracle 10g. So, his script may not be applicable from version 9i onwards. We will probe this further in this blog.

This is the problem with writing about internals: they tend to change from version to version, and in many cases our work can become obsolete in future releases (including this blog!).

Keep Reading

Posted in 11g, Oracle database internals, Performance tuning, shared_pool | 2 Comments »

ORA-4031 and Shared Pool Duration

Posted by Riyaj Shamsudeen on August 6, 2009

After reading my earlier post on the shared pool, A stroll through shared pool heap, one of my clients contacted me with an interesting ORA-4031 issue. The client was getting ORA-4031 errors even though the shared pool size was over 4GB (in a RAC environment). The client DBA queried v$sgastat to show that there was plenty of free memory in the shared pool. We researched the issue, and it is worth blogging about. The client DBA was confused as to how there could be ORA-4031 errors when the shared pool had a few GB of free memory.
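As a side note, the check the DBA ran boils down to something like this hedged sketch against v$sgastat (use gv$sgastat in RAC to see each instance):

-- Free memory reported per pool; shared pool free memory can be in the GBs
-- and ORA-4031 can still occur, as discussed below.
select pool, round(bytes/1048576) free_mb
  from v$sgastat
 where name = 'free memory'
 order by pool;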

Heapdump Analysis

At this point, it is imperative to take a heapdump at level 2; level 2 dumps the shared pool heap. [Please be warned that it is not advisable to take shared pool heap dumps excessively, as that itself can cause performance issues. During an offline conversation, Tanel Poder mentioned that a heapdump can freeze an instance, as his clients have experienced.] This will create a trace file in the user_dump_dest destination, and that trace file is quite useful for analyzing the contents of the shared pool heap. Tanel Poder has an excellent script, heapdump_analyzer. I modified that script, adding code for aggregation at heap, extent and type levels to debug this issue further; it is available as heapdump_dissect.ksh (with special permission from Tanel to publish this script).
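For completeness, the level 2 dump mentioned above can be taken as follows (again, use it sparingly on a busy system):

-- Dumps the shared pool heap to a trace file in user_dump_dest.
alter session set events 'immediate trace name heapdump level 2';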

Shared pool review

You can read much more about the shared pool in my earlier blog entry linked above. As a cursory review, the shared pool is split into multiple sub-heaps. In 10g, each of those sub-heaps is divided into even smaller sub-heaps; let's call them mini-heaps. For example, in this specific database there are three sub-heaps, and each of those sub-heaps is further split into four mini-heaps, such as (1,0), (1,1), (1,2) and (1,3) for the first sub-heap.

	sga heap(1,0)
	sga heap(1,1)
	sga heap(1,2)
	sga heap(1,3)

	sga heap(2,0)
	sga heap(2,1)
	sga heap(2,2)
	sga heap(2,3)

	sga heap(3,0)
	sga heap(3,1)
	sga heap(3,2)
	sga heap(3,3)	
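As a hedged aside, the number of shared pool sub-heaps (subpools) in a given database can be confirmed with the usual undocumented-parameter query; _kghdsidx_count is the hidden parameter behind it:

-- Requires SYS access; x$ksppi holds parameter names, x$ksppcv the values.
select p.ksppinm parameter, v.ksppstvl value
  from x$ksppi p, x$ksppcv v
 where p.indx = v.indx
   and p.ksppinm = '_kghdsidx_count';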

Keep Reading

Posted in Oracle database internals, Performance tuning, RAC | 14 Comments »

RAC, parallel query and udpsnoop

Posted by Riyaj Shamsudeen on June 20, 2009

I presented about various performance myths in my 'battle of the nodes' presentation. One of the myths was about how spawning parallel query slaves across multiple RAC instances can cause a major bottleneck in the interconnect. In fact, that myth was a direct result of a lessons-learned presentation from a client engagement. The client was suffering from performance issues, with enormous global cache waits running into 30+ ms average response time for global cache CR traffic and crippling application performance. Essentially, their data warehouse queries were performing hundreds of parallel queries concurrently, with slaves spawning across three RAC instances.

Of course, I had to hide the client details, and I simplified the scenario with a test case to explain the myth. It looks like either a) my test case is bad, b) I hit some sort of bug in version 9.2.0.5, or c) I made a mistake in my analysis somewhere. Most likely it is the last one :-(. Greg Rahn questioned that example, and this topic deserves more research to understand it a little further. At this point, I don't have 9.2.0.5 and the database is on 10.2.0.4, so we will test this in 10.2.0.4.

udpsnoop

UDP is one of the protocols used for cache fusion traffic in RAC, and it is the Oracle-recommended protocol. For this article, the UDP traffic size must be measured. Measuring global cache traffic using AWR reports was not precise enough, so I decided to use a DTrace toolkit script, udpsnoop.d, to measure the traffic between the RAC nodes. There are two RAC nodes in this setup. You can read more about udpsnoop.d and download it as part of the DTrace toolkit. The output of this script is of the form:

PID        LADDR           LPORT           DR         RADDR           RPORT                 SIZE
---------- --------------- --------------- ---------- --------------- --------------- -----------
15393      1.1.59.192      38395           ->         2.1.59.192      40449                 8240
...
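As a rough in-database cross-check (a hedged sketch only), the cumulative interconnect block counts are also visible in gv$sysstat; they must be sampled twice and differenced, which is exactly why a packet-level tool such as udpsnoop.d is more precise:

-- Cumulative since instance startup; statistic names as in 10.2.
select inst_id, name, value
  from gv$sysstat
 where name in ('gc cr blocks received', 'gc current blocks received',
                'gc cr blocks served',   'gc current blocks served')
 order by inst_id, name;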

Keep Reading

Posted in Oracle database internals, Performance tuning, RAC | 15 Comments »

Library cache lock and library cache pin waits

Posted by Riyaj Shamsudeen on June 2, 2009

I encountered a few customer issues centered around library cache lock and library cache pin waits. Library cache lock and pin waits can hang an instance, and in a few cases a whole cluster of RAC instances can hang due to library cache lock and pin waits.

Why are library cache locks needed?

Library cache locks, aka parse locks, are needed to maintain the dependency mechanism between objects and their dependent objects, such as SQL statements. If an object definition needs to be modified, or if parse locks are to be broken, then dependent objects must be invalidated. This dependency is maintained using library cache locks. For example, if a column is dropped from a table, then all SQLs dependent upon that table must be invalidated and reparsed during the next access to the object. Library cache locks are designed to implement this tracking mechanism.
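The dependency tracking that these parse locks protect is also visible through documented views; a hedged illustration (the table name is made up):

-- Stored objects that would be invalidated if SCOTT.EMP were altered.
select owner, name, type
  from dba_dependencies
 where referenced_owner = 'SCOTT'
   and referenced_name  = 'EMP';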

In a regular enqueue locking scenario there is a resource (for example, a TM table-level lock) and sessions enqueue to lock that resource. More discussion on enqueue locking can be found in Internals of locks. Similarly, library cache locks use object handles as resource structures, and locks are taken on those resources. If a resource is not available in a compatible mode, then sessions must wait for the library cache object to become available.

Why are library cache pins needed?

Library cache pins deal with the current execution of dependent objects. For example, an underlying object should not be modified while a session is executing or accessing a dependent object. So, before parse locks on a library cache object can be broken, library cache pins must be acquired in exclusive mode; only then can the library cache object be dropped. If a session is executing a stored object, the library cache pin will not be available and there will be waits for library cache pins. Typically, this happens with a long-running statement executing a stored object.

x$kgllk, x$kglpn and x$kglob

Library cache locks and pins are externalized in three x$ tables. x$kgllk externalizes all lock structures on an object, entries in x$kglob act as resource structures, and x$kglpn externalizes all library cache pins.

x$kglob.kglhdadr acts as a pointer to the resource structure. Presumably, kglhdadr stands for KGL handle address. x$kgllk acts as a lock structure, and x$kgllk.kgllkhdl points to x$kglob.kglhdadr. Similarly, x$kglpn acts as a pin structure, and x$kglpn.kglpnhdl points to x$kglob.kglhdadr to pin a resource. To draw an analogy with object locking scenarios: x$kglob acts as the resource structure and x$kgllk acts as the lock structure for library cache locks; for library cache pins, x$kglpn acts as the pin structure and pins the resource via kglpnhdl. This should become clearer after reviewing the example below.

Test case

We will create a simple test case to create library cache locks and pin waits

create or replace procedure backup.test_kgllk (l_sleep in boolean , l_compile in boolean)
as
 begin
  if (l_sleep ) then
	sys.dbms_lock.sleep(60);
  elsif (l_compile )  then
  	execute immediate 'alter procedure test_kgllk compile';
  end if;
 end;
/

In the test case above, we create a procedure that accepts two boolean parameters: sleep and compile. Passing true for the first argument makes the procedure sleep for a minute, and passing true for the second argument makes the procedure recompile itself.

Let's create two sessions in the database and then execute the procedure as below.

Session #1: exec test_kgllk ( true, false); -- Sleep for 1 minute, no compile
Session #2: exec test_kgllk ( false, true); -- No sleep, but compile

At this point, both sessions are waiting. The following SQL can be used to print session wait details.

select
 distinct
   ses.ksusenum sid, ses.ksuseser serial#, ses.ksuudlna username,ses.ksuseunm machine,
   ob.kglnaown obj_owner, ob.kglnaobj obj_name
   ,pn.kglpncnt pin_cnt, pn.kglpnmod pin_mode, pn.kglpnreq pin_req
   , w.state, w.event, w.wait_Time, w.seconds_in_Wait
   -- lk.kglnaobj, lk.user_name, lk.kgllksnm,
   --,lk.kgllkhdl,lk.kglhdpar
   --,trim(lk.kgllkcnt) lock_cnt, lk.kgllkmod lock_mode, lk.kgllkreq lock_req,
   --,lk.kgllkpns, lk.kgllkpnc,pn.kglpnhdl
 from
  x$kglpn pn,  x$kglob ob,x$ksuse ses 
   , v$session_wait w
where pn.kglpnhdl in
(select kglpnhdl from x$kglpn where kglpnreq >0 )
and ob.kglhdadr = pn.kglpnhdl
and pn.kglpnuse = ses.addr
and w.sid = ses.indx
order by seconds_in_wait desc
/

Output of the above SQL:

                                                                pin  pin  pin                                 wait seconds
  SID   SERIAL# USERNAME     MACHINE   OBJ_OWNER  OBJ_NAME      cnt  mode req  STATE      EVENT               time in_wait
----- --------- ------------ --------- ---------- ------------- ---- ---- ---- ---------- ------------------- ----- -------
  268     12409 SYS          orap      SYS        TEST_KGLLK    3    2    0    WAITING    PL/SQL lock timer       0       7
  313     45572 SYS          orap      SYS        TEST_KGLLK    0    0    3    WAITING    library cache pin       0       3
  313     45572 SYS          orap      SYS        TEST_KGLLK    3    2    0    WAITING    library cache pin       0       3

  1. Session 268 (session #1) is sleeping while holding a library cache pin on the test_kgllk object (waiting on 'PL/SQL lock timer', to be more accurate).
  2. Session 313 is holding a library cache pin in mode 2 and waiting for a library cache pin in mode 3.

Obviously, session 313 is waiting for session 268 to release its library cache pin. Since session 268 is executing, session 313 should not be allowed to modify the test_kgllk library cache object. That's exactly why library cache pins are needed.
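If you prefer to stay with documented views for a first look, v$session_wait shows the same waiters; for 'library cache pin' and 'library cache lock' waits, P1 is the object handle address. A minimal sketch:

-- Sessions currently waiting on library cache pins or locks.
select sid, event, p1raw handle_address, state, seconds_in_wait
  from v$session_wait
 where event in ('library cache pin', 'library cache lock')
 order by seconds_in_wait desc;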

Adding another session to this mix..

Let’s add one more session as below

 
exec test_kgllk (false, true);

Output of the above query:

 
                                                                                   pin  pin  pin                                            wait seconds
  SID   SERIAL# USERNAME     MACHINE              OBJ_OWNER  OBJ_NAME              cnt mode  req STATE      EVENT                           time in_wait
----- --------- ------------ -------------------- ---------- -------------------- ---- ---- ---- ---------- ------------------------------ ----- -------
  268     12409 SYS          oraperf              SYS        TEST_KGLLK              3    2    0 WAITING    PL/SQL lock timer                  0      34
  313     45572 SYS          oraperf              SYS        TEST_KGLLK              0    0    3 WAITING    library cache pin                  0      29
  313     45572 SYS          oraperf              SYS        TEST_KGLLK              3    2    0 WAITING    library cache pin                  0      29
  442      4142 SYS          oraperf              SYS        TEST_KGLLK              0    0    2 WAITING    library cache pin                  0       3

Well, no surprise there. The new session 442 is also waiting for a library cache pin. But notice the request mode for session 442: it is 2. Session 442 needs the library cache pin in share mode to start execution, but session 313 has already requested that library cache pin in mode 3. A queue is building up here. Many processes can queue behind session 313 at this point, leading to a hung instance.

Library cache locks

Let's execute the same procedure from both sessions, this time with the same parameters.

 Session #1: exec test_kgllk(false, true);
 Session #2: exec test_kgllk(false, true);

Rerunning the above query tells us that session 313 is waiting on itself. Eventually, this will lead to a library cache pin self-deadlock.


Library cache pin holders/waiters
---------------------------------
                                                                                   pin  pin  pin                                            wait seconds
  SID   SERIAL# USERNAME     MACHINE              OBJ_OWNER  OBJ_NAME              cnt mode  req STATE      EVENT                           time in_wait
----- --------- ------------ -------------------- ---------- -------------------- ---- ---- ---- ---------- ------------------------------ ----- -------
  313     45572 SYS          oraperf              SYS        TEST_KGLLK              0    0    3 WAITING    library cache pin                  0      26
  313     45572 SYS          oraperf              SYS        TEST_KGLLK              3    2    0 WAITING    library cache pin                  0      26

Wait, what happened to session #2? It is not visible in x$kglpn. Querying v$session_wait shows that session #2 is waiting for a library cache lock. We will run yet another query, this time against x$kgllk, to see the library cache lock waits.

 
  Querying x$kgllk with the query below: 
select
 distinct
   ses.ksusenum sid, ses.ksuseser serial#, ses.ksuudlna username,KSUSEMNM module,
   ob.kglnaown obj_owner, ob.kglnaobj obj_name
   ,lk.kgllkcnt lck_cnt, lk.kgllkmod lock_mode, lk.kgllkreq lock_req
   , w.state, w.event, w.wait_Time, w.seconds_in_Wait
 from
  x$kgllk lk,  x$kglob ob,x$ksuse ses
  , v$session_wait w
where lk.kgllkhdl in
(select kgllkhdl from x$kgllk where kgllkreq >0 )
and ob.kglhdadr = lk.kgllkhdl
and lk.kgllkuse = ses.addr
and w.sid = ses.indx
order by seconds_in_wait desc
/

Library cache lock holders/waiters
---------------------------------
                                                                                   lock lock                                            wait seconds
  SID   SERIAL# USERNAME     MODULE     OBJ_OWNER  OBJ_NAME                LCK_CNT mode  req STATE      EVENT                           time in_wait
----- --------- ------------ ---------- ---------- -------------------- ---------- ---- ---- ---------- ------------------------------ ----- -------
  313     45572 SYS          wsqfinc1a  SYS        TEST_KGLLK                    1    1    0 WAITING    library cache pin                  0      29
  313     45572 SYS          wsqfinc1a  SYS        TEST_KGLLK                    1    3    0 WAITING    library cache pin                  0      29
  268     12409 SYS          wsqfinc1a  SYS        TEST_KGLLK                    0    0    2 WAITING    library cache lock                 0      12
  268     12409 SYS          wsqfinc1a  SYS        TEST_KGLLK                    1    1    0 WAITING    library cache lock                 0      12

Session 313 is holding the library cache lock on that object in mode 3, and session 268 is requesting a lock on that library cache object in mode 2. So, session 268 is waiting for a library cache lock while session 313 is waiting for a library cache pin (on itself). Again, the point here is that session 268 is trying to access the library cache object and needs to acquire the library cache lock in the correct mode. That library cache lock is not available, leading to a wait.

The complete script can be downloaded from my script archive.

RAC, library cache locks and pins

Things are different in RAC. Library cache locks and pins are global resources controlled by the GES layer. So, these scripts might not work if the library cache lock and pin waits are global events. Let's look at what happens in a RAC environment.

exec test_kgllk ( false, true); -- node 1
exec test_kgllk ( false, true); -- node 2

In node1, only one session is visible.

Library cache pin holders/waiters
----------------------------------
                                                                                   pin  pin  pin                                            wait seconds
  SID   SERIAL# USERNAME     MACHINE              OBJ_OWNER  OBJ_NAME              cnt mode  req STATE      EVENT                           time in_wait
----- --------- ------------ -------------------- ---------- -------------------- ---- ---- ---- ---------- ------------------------------ ----- -------
  268     12409 SYS          oraperf              SYS        TEST_KGLLK              0    0    3 WAITING    library cache pin                  0      18
  268     12409 SYS          oraperf              SYS        TEST_KGLLK              3    2    0 WAITING    library cache pin                  0      18

In node 2, only the requestor of the lock is visible.

                                                                                   lock lock                                            wait seconds
  SID   SERIAL# USERNAME     MODULE     OBJ_OWNER  OBJ_NAME                LCK_CNT mode  req STATE      EVENT                           time in_wait
----- --------- ------------ ---------- ---------- -------------------- ---------- ---- ---- ---------- ------------------------------ ----- -------
  377     43558 SYS          wsqfinc2a  SYS        TEST_KGLLK                    0    0    2 WAITING    library cache lock                 0      86

Essentially, this script does not work in a RAC environment since it accesses x$ tables directly, and those are local to an instance. To understand the issue in a RAC environment we would need gv$ views based on x$kgllk, x$kglpn etc., but I don't see gv$ views over these x$ tables. We are out of luck there unless we do some more coding.

Nevertheless, we can see lockers and waiters by querying gv$ges_blocking_enqueue to understand locking in RAC.

  1  select inst_id, handle, grant_level, request_level, resource_name1, resource_name2, pid , transaction_id0, transaction_id1
  2* ,owner_node, blocked, blocker, state from gv$ges_blocking_enqueue
SQL> /

   INST_ID HANDLE           GRANT_LEV REQUEST_L RESOURCE_NAME1                 RESOURCE_NAME2                        PID
---------- ---------------- --------- --------- ------------------------------ ------------------------------ ----------
TRANSACTION_ID0 TRANSACTION_ID1 OWNER_NODE    BLOCKED    BLOCKER
--------------- --------------- ---------- ---------- ----------
STATE
----------------------------------------------------------------
         2 00000008DD779258 KJUSERNL  KJUSERPR  [0x45993b44][0x3a1b9eee],[LB]  1167670084,974888686,LB              8700
              0               0          1          1          0
OPENING

         1 00000008E8123878 KJUSEREX  KJUSEREX  [0x45993b44][0x3a1b9eee],[LB]  1167670084,974888686,LB             12741
              0               0          0          0          1
GRANTED


We can see that PID 12741 on instance 1 is holding a library cache global lock of type [LB]. The global resource in this case is [0x45993b44][0x3a1b9eee],[LB], which uniquely identifies a library cache object at the cluster level. The grant_level is KJUSEREX (Exclusive), and the request_level from node 2 is KJUSERPR (Protected Read). PID 8700 on node 2 is waiting for the library cache lock held by PID 12741 on node 1. Using this output and our script output, we can pinpoint which process is holding a library cache lock or pin. While library cache locks are globalized as lock types in the range [LA] through [LZ], library cache pins are globalized as lock types in the range [NA] through [NZ].

This blog can be read in a document format from
Library_cache_locks_and_library_cache_pin_waits
Update #1: Updated broken links.
Update #2: Updated verbatim after a reader comment.

Posted in Oracle database internals, Performance tuning | 33 Comments »

COLLABORATE 2009 presentation: 11g performance new features

Posted by Riyaj Shamsudeen on May 6, 2009

I just presented on Oracle 11g new features specific to performance at COLLABORATE 2009 in Orlando, Florida. You can download the presentation and paper from here:

11g performance specific new features – presentation
11g performance specific new features – paper

I met a couple of new friends and many familiar faces.

Catherin Devlin introduced the sqlpython tool to me. This tool is a user-friendly replacement for sqlplus with many UNIX-like goodies, and it will be a useful addition to command-line tools. You can see installation instructions for sqlpython here.

I also attended an interesting application performance panel discussion with fellow Oakies Mark W Farnheim, Mike Brown and Sandra Vucinic. We discussed a few customer issues.

One problem discussion is worth mentioning. A client clones the production database to a development database using the full rapidclone methodology, but the access plan for one SQL statement differs between production and development, even though everything was supposedly exactly the same. I am hoping the client will send 10053 trace files from both databases, and I will blog about it if I get anything from them. I think bind peeking is causing the plan difference, but I need to analyze the 10053 trace files to confirm that. Another possibility is that he is not cloning the Oracle software home and some software patches are missing in the cloned environment. I need the trace files to proceed further here.
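For readers who want to generate the same evidence, a 10053 (CBO) trace is enabled per session and is written during a hard parse; a minimal sketch:

-- Enable the optimizer trace, hard parse the statement, then turn it off.
alter session set events '10053 trace name context forever, level 1';
-- run (hard parse) the SQL in question here
alter session set events '10053 trace name context off';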

Of course, I met a few other friends as well, Jeremy Schneider and Dan Norris to mention a couple.

Posted in 11g, CBO, EBS11i, Oracle database internals, Performance tuning, Presentations | Leave a Comment »

DOUG presentation: Why does optimizer hate my SQL?

Posted by Riyaj Shamsudeen on April 18, 2009

I presented on the cost based optimizer, explaining why the CBO sometimes chooses an inefficient access plan even though there is an efficient plan in the search space. This entry is to post the presentation slides, which can be downloaded from Why_optimizer_hates_my_sql.

Update: Updated presentation after Greg’s comments.
Update2: Further bug fixes in presentation and script.
Update3: Updates after Randolf’s comments.

Posted in 11g, CBO, Performance tuning, Presentations | 8 Comments »

Dynamic_plan_table, x$kqlfxpl and extreme library cache latch contention

Posted by Riyaj Shamsudeen on March 13, 2009

We had an interesting latch contention issue with a client database that is worth sharing. The client complained that they were seeing increased library cache latch contention in production with no logical explanation (yet), so they wanted us to find the root cause. This performance issue started recently, occurs intermittently and resolves by itself. The client is planning to upgrade to 10g within a 4-week time frame, and this issue needed to be resolved to avoid any delays in the 10g upgrade.

From statspack analysis and various statistics, it was visible that there was an increase in library cache latch contention, but nothing jumped out as a problem. Of course, this is a very active application [a 5-node RAC application] with a very high number of literal SQLs, but that is nothing new; it has always been this way. No new dynamic SQLs, no DDLs invalidating cursors, or anything of that sort. I started watching the production instance in real time, waiting for the issue to recur. It occurred!

latchprofx to the rescue

At this very juncture, the instance was suffering from latch contention, and it was visible that sessions were starting to queue up for latches.

I was ready with the necessary tools. Tanel Poder has written a great tool, latchprofx, to sample v$latch_holder at high frequency [actually, it uses x$ tables, but you get the idea]. In this case, high-frequency sampling is a must, since this performance problem disappears within a few minutes. I executed the latchprofx script for library cache latches and 10,000 samples, as printed below.

SQL>  @/tmp/latchprofx  sid,name % library 10000

-- LatchProfX 1.06 by Tanel Poder ( http://www.tanelpoder.com )

  SID NAME                                      Held  Held %     Held ms
------ ----------------------------------- ---------- ------- -----------
   140 library cache                             9555   95.55   25167.870
   185 library cache                              198    1.98     521.532
   245 library cache                               97     .97     255.498
   185 library cache pin                           69     .69     181.746
   245 library cache pin                           39     .39     102.726
   240 library cache                               33     .33      86.922
   240 library cache pin                            6     .06      15.804
   227 library cache                                2     .02       5.268
   227 library cache pin                            1     .01       2.634

Yikes. We can see that one session was holding a library cache latch in 95.5% of the samples. That seems a bit odd.

Of course, there are many latch children for the library cache latch. Is that session repeatedly acquiring just one latch child, or is it acquiring many different latch children? More importantly, what is that session executing?

Latchprofx to the rescue again. Latchprofx can also print the latch children's addresses. I re-executed the latchprofx script with a modified call; the only difference between these two calls is that the first argument to latchprofx now includes laddr.


@/tmp/latchprofx  sid,name,laddr % library 10000

   SID NAME                                LADDR                  Held  Held %     Held ms
------ ----------------------------------- ---------------- ---------- ------- -----------
   140 library cache                       0000000990FEFA18        298    2.98     783.442
   140 library cache                       0000000990FED2B8        270    2.70     709.830
   140 library cache                       0000000990FEF298        254    2.54     667.766
   140 library cache                       0000000990FEF658        254    2.54     667.766
   140 library cache                       0000000990FEECF8        240    2.40     630.960
   140 library cache                       0000000990FF0198        236    2.36     620.444
   140 library cache                       0000000990FEE488        225    2.25     591.525
   140 library cache                       0000000990FEFCE8        223    2.23     586.267
   140 library cache                       0000000990FEEED8        222    2.22     583.638
   140 library cache                       0000000990FEF388        222    2.22     583.638
   140 library cache                       0000000990FEDEE8        220    2.20     578.380
   140 library cache                       0000000990FF00A8        216    2.16     567.864
   140 library cache                       0000000990FEE758        215    2.15     565.235
   140 library cache                       0000000990FEEFC8        212    2.12     557.348
   140 library cache                       0000000990FEE938        211    2.11     554.719
   140 library cache                       0000000990FEE668        205    2.05     538.945
   140 library cache                       0000000990FEDB28        201    2.01     528.429
....

Whoa! This session was acquiring nearly all the latch children. Notice that the latch addresses are different in every output row, meaning that different latchprofx samples saw this session holding different latch children. In a nutshell, this session was acquiring and releasing ALL library cache latch children in quasi-sequential fashion and causing a massive performance issue. There are 53 latch children for the library cache latch in this database. The Held % column is approximately 2%, indicating that all library cache latch children were acquired and released in a quasi-uniformly distributed (2% times 50), quasi-cyclical fashion! The same pattern repeats if I reduce or increase the number of samples.
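The latch child count itself is easy to confirm from v$latch_children; a hedged sketch (in this database it reported 53 children for 'library cache'):

-- Library cache latch children and their activity in this instance.
select name, count(*) children, sum(gets) gets, sum(misses) misses, sum(sleeps) sleeps
  from v$latch_children
 where name = 'library cache'
 group by name;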

Latch greedy session

Initially, I thought this was a bug. I had not seen a case where all library cache latch children were needed to execute a SQL, or where a single process caused massive latch contention. [Okay, I confess, I have simulated that kind of issue in my test database by holding library cache latch children using oradebug kslgetl, but that is a different topic altogether.] Session 140 was executing the SQL printed below. This SQL queries the execution plan from the shared pool for a specific hash_value in this 9i database (the equivalent of dbms_xplan.display_cursor in 10g).

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&hv ), 'serial' ) )
/

I turned on sqltrace for that session, and tkprof showed that a recursive SQL generated by the dbms_xplan package was the costly SQL.

REM Removing few lines to improve readability.
SELECT /* EXEC_FROM_DBMS_XPLAN */ id, position, level , operation, options, object_name , cardinality, bytes, temp_space,
        cost, io_cost, cpu_cost ,decode(partition_start, 'ROW LOCATION',
....
    		from BACKUP.DYNAMIC_PLAN_TABLE start  with id = 0
            and  timestamp >= 
		(select max(timestamp) from BACKUP.DYNAMIC_PLAN_TABLE where id=0  and 
			statement_id = '00000009BB9B6D50_0' and nvl(statement_id, ' ') not like 'SYS_LE%')
            and  nvl(statement_id, ' ') not like 'SYS_LE%' and statement_id = '00000009BB9B6D50_0' connect by (prior id = parent_id
                      and prior nvl(statement_id, ' ') =
                                nvl(statement_id, ' ')
                      and prior timestamp <= timestamp)
                  or (prior nvl(object_name, ' ') like 'SYS_LE%'
                      and  prior nvl(object_name, ' ') =
                                 nvl(statement_id, ' ')
                      and id = 0 and prior timestamp <= timestamp)
          order siblings by id
/

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       30     53.79      55.90        423     375837          0          29
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       32     53.82      55.93        423     375837          0          29


X$KQLFXPL

55 seconds to execute this SQL? That seems a bit excessive. This SQL is the only time-consuming recursive SQL generated by the select statement, so somehow it must be causing the latch contention. I will print only the relevant lines from the plan output.

....
|   4 |     NESTED LOOPS OUTER           |             |     1 |  4479 |    21   (5)|
|   5 |      NESTED LOOPS OUTER          |             |     1 |  4475 |    20   (5)|
|*  6 |       FIXED TABLE FULL           | X$KQLFXPL   |     1 |  4444 |            | <---- FTS
|   7 |       TABLE ACCESS BY INDEX ROWID| OBJ$        |     1 |    31 |     3  (34)|
|*  8 |        INDEX UNIQUE SCAN         | I_OBJ1      |     1 |       |     2  (50)|
|   9 |      TABLE ACCESS CLUSTER        | USER$       |     1 |     4 |     2  (50)|
|* 10 |       INDEX UNIQUE SCAN          | I_USER#     |     1 |       |            |
....

   6 - filter("P"."INST_ID"=:B1 AND "P"."KQLFXPL_HADD"<>"P"."KQLFXPL_PHAD")

Line #6 is the problem. The table x$kqlfxpl is accessed using a full table scan. It looks like x$kqlfxpl cannot be accessed without holding library cache latches. Since there is a FTS on this fixed table, almost every library cache object needs to be touched, which potentially means all library cache objects must be inspected. Further, accessing this x$ table means the library cache buckets (the area controlled by _kgl_bucket_count) need to be walked, with latches acquired and released before each memory area is inspected. Makes sense!

Repeated calls to kqlfxp, kglic and kglic0 were also visible in the stack output. These calls suggest that library cache latches were acquired and released; systemstate dumps confirm that too.

I was also able to reproduce this issue by querying x$kqlfxpl in a cloned database.

SQL>  select /*+ full(a) */  count(*) from x$kqlfxpl a;

  COUNT(*)
----------
     48926

-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |     1 |       |    18   (6)|
|   1 |  SORT AGGREGATE      |             |     1 |       |            |
|   2 |   FIXED TABLE FULL   | X$KQLFXPL   |   100 |       |            |
-------------------------------------------------------------------------

Running the latchprofx script concurrently from another session shows that accessing this x$ table results in excessive library cache latching activity. The Rows estimate for this table shows 100, which is the CBO's default cardinality estimate when there are no statistics on a table.


@/tmp/latchprofx  sid,name,laddr % library 10000

   SID NAME                                LADDR                  Held  Held %     Held ms
------ ----------------------------------- ---------------- ---------- ------- -----------
   127 library cache                       0000000990FED2B8         26     .26      68.120
   127 library cache                       0000000990FEFEC8         24     .24      62.880
   127 library cache                       0000000990FEF658         23     .23      60.260
   127 library cache                       0000000990FEE668         22     .22      57.640
   127 library cache                       0000000990FEFA18         22     .22      57.640
...

Background

Now, a little background is necessary. What started all this? As part of 10g upgrade planning, the client DBAs had set up a job to capture costly SQLs and store their execution plans from the shared pool using a dynamic_plan_table script, borrowed from Tom Kyte's dynamic_plan_table discussion. Excellent idea and wonderfully written SQL, Tom.

If you don't already know it, Tom's method creates the following view. The client script passes the hash_value of costly SQLs from v$sql to this view.

create or replace view dynamic_plan_table
    as
    select
     rawtohex(address) || '_' || child_number statement_id,
     sysdate timestamp, operation, options, object_node,
     object_owner, object_name, 0 object_instance,
     optimizer,  search_columns, id, parent_id, position,
     cost, cardinality, bytes, other_tag, partition_start,
     partition_stop, partition_id, other, distribution,
   cpu_cost, io_cost, temp_space, access_predicates,
   filter_predicates
    from v$sql_plan
;

REM there is a wrapper script calling following SQL, in a loop, for top 20 SQL statements at a stretch. 
REM Of course, there is more logic in that script.

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&hv ), 'serial' ) )
/

Unfortunately, such an awesome method has side effects in version 9.2.0.8, as explained above.

At this point, I am almost sure that we need to collect statistics on fixed tables to resolve this issue. There is a fixed index on x$kqlfxpl (KQLFXPL_HADD, KQLFXPL_PHAD, KQLFXPL_HASH), and if that index is used, the effect of this problem should disappear. Since there is a plan to upgrade to 10g soon, the client didn't want to collect fixed table statistics.

So, I worked around the issue. It is not an elegant solution, but it avoids the problem by recreating the view with a hard-coded hash value for every loop.

create or replace view backup.dynamic_plan_table
    as
    select
     rawtohex(address) || '_' || child_number statement_id,
     sysdate timestamp, operation, options, object_node,
     object_owner, object_name, 0 object_instance,
     optimizer,  search_columns, id, parent_id, position,
     cost, cardinality, bytes, other_tag, partition_start,
     partition_stop, partition_id, other, distribution,
   cpu_cost, io_cost, temp_space, access_predicates,
   filter_predicates
    from v$sql_plan 
   where hash_value=&&hv    -- Added this line..
;

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&&hv ), 'serial' ) )
/

Now, it is evident that the query runs in 0.03 seconds and also uses the fixed index.

     51     NESTED LOOPS OUTER (cr=261 r=0 w=0 time=1327 us)
     51      NESTED LOOPS OUTER (cr=131 r=0 w=0 time=892 us)
     51       FIXED TABLE FIXED INDEX X$KQLFXPL (ind:3) (cr=0 r=0 w=0 time=496 us)
     27       TABLE ACCESS BY INDEX ROWID OBJ$ (cr=131 r=0 w=0 time=327 us)
     27        INDEX UNIQUE SCAN I_OBJ1 (cr=104 r=0 w=0 time=212 us)(object id 33)
     27      TABLE ACCESS CLUSTER USER$ (cr=130 r=0 w=0 time=359 us)
     27       INDEX UNIQUE SCAN I_USER# (cr=28 r=0 w=0 time=78 us)(object id 11)

Finally

The 10g upgrade guide recommends collecting fixed table statistics, so this issue should completely disappear in 10g.
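For reference, from 10g onward fixed object statistics are gathered with the documented dbms_stats call below; a sketch, to be tested outside production first:

begin
  -- Collects statistics on the x$ fixed tables, including x$kqlfxpl.
  dbms_stats.gather_fixed_objects_stats;
end;
/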

Posted in CBO, Oracle database internals, Performance tuning | 11 Comments »

Systemstate dump analysis: Nocache on high intensive sequences in RAC

Posted by Riyaj Shamsudeen on March 10, 2009

Sequence values are cached in instance memory; by default, 20 values are cached. As the instance cache is transient, loss of an instance can result in the loss of cached sequence values. A permanent record of the highest possible value from any instance is kept in the SEQ$ table.

SQLs accessing this sequence within an instance will access the instance SGA. As each instance caches its own sequence values, it is highly likely that SQLs accessing this sequence from different instances will create gaps in the sequence values.

A common knee-jerk reaction to this issue is to set nocache or cache 1 for these sequences. In a single-instance environment, this approach will backfire due to massive updates to the SEQ$ table, buffer busy waits, latch free waits etc. In a RAC environment, this issue is magnified and almost hangs the instance. I had the privilege of working with a client to resolve one such performance issue.
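The usual direction of the fix is the opposite of that knee-jerk reaction: keep a generous cache on hot sequences and accept the gaps. A hedged sketch, with a made-up sequence name:

-- Restores a healthy cache; NOORDER avoids cross-instance ordering overhead in RAC.
alter sequence app_owner.order_id_seq cache 1000 noorder;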

Problem

Instances were hung, and it was not possible to log in to the database, though many existing connections were working fine. We were lucky that one of the DBAs had an active connection to the database, so we took a systemstate dump from that session to see why there was a hang (or slowness).

The alert log was printing the following lines in this time frame, so we knew that there was a problem with row cache enqueue locks.

Wed Feb 18 06:31:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=45
Wed Feb 18 08:59:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=44
Wed Feb 18 08:59:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=43
Wed Feb 18 09:01:00 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=46
Wed Feb 18 09:01:00 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=27

systemstate dump

We took a systemstate dump with the following command:

alter session set events 'immediate trace name systemstate level 4';

Continue Reading

Posted in Oracle database internals, Performance tuning | 14 Comments »

High global cache waits on tab$ ?

Posted by Riyaj Shamsudeen on February 25, 2009

Yes, you read that correctly. It shocked me too.

I was trying to understand global cache waits for a client. Naturally, I queried the statspack tables and analyzed the data for just one day using a script. Surprisingly, tab$ came out as a top consumer of global cache waits. I was shocked by the revelation and couldn't believe it! If something doesn't make sense, look more closely, right?

Global cache waits

The database version is 9i. The statspack table stats$seg_stat is the source for this script, and that table is populated from the segment-level statistics (v$segment_statistics). These column values (global_cache_cu_blocks_served and global_cache_cr_blocks_served) are cumulative, so to find statistics for a specific period we need to subtract the column value of the prior row from the current row. The analytic function lag is useful for this.

In the analytic function printed below, the partitioning clause uses instance_number, startup_time, dataobj# and obj#. All rows with the same values for these columns are considered one data partition, and rows are then ordered by snap_id within that partition. Lag pulls the row from the prior snap_id in that partition, and we subtract that from the current value to get the difference. Please refer to this paper, Performance tuning with SQL new features, for more information about analytic functions.

  ...
   global_cache_cu_blocks_served -
   lag(global_cache_cu_blocks_served,1,0) over (partition by instance_number,startup_time, dataobj#, obj#
              order by  snap_id ) global_cache_cu_blocks_served,
...
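A self-contained sketch of that delta calculation is below (startup_time comes from stats$snapshot; column names are as quoted above, and the complete script adds filters and the object-name lookup):

-- Per-snapshot deltas of global cache blocks served for each segment.
select ss.snap_id, ss.instance_number, ss.obj#, ss.dataobj#,
       ss.global_cache_cu_blocks_served
         - lag(ss.global_cache_cu_blocks_served, 1, 0)
             over (partition by ss.instance_number, sn.startup_time, ss.dataobj#, ss.obj#
                   order by ss.snap_id) cu_blocks_served,
       ss.global_cache_cr_blocks_served
         - lag(ss.global_cache_cr_blocks_served, 1, 0)
             over (partition by ss.instance_number, sn.startup_time, ss.dataobj#, ss.obj#
                   order by ss.snap_id) cr_blocks_served
  from stats$seg_stat ss, stats$snapshot sn
 where ss.snap_id         = sn.snap_id
   and ss.dbid            = sn.dbid
   and ss.instance_number = sn.instance_number
 order by ss.snap_id, ss.instance_number;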

Script and output

The complete script is printed below, along with the output of running it against a database.

Continue Reading

Posted in EBS11i, Oracle database internals, Performance tuning | 6 Comments »

 