Oracle database internals by Riyaj

Discussions about Oracle performance tuning, RAC, Oracle internals & E-Business Suite.

Posts Tagged ‘performance’

Book: Expert Oracle RAC 12c

Posted by Riyaj Shamsudeen on September 8, 2013

A quick note: the Expert Oracle RAC book I co-wrote is available now: Expert Oracle RAC 12c. I wrote about six chapters covering the RAC internals that you may want to learn :) I even managed to discuss the network internals in depth; after all, the network is one of the most important components of a RAC cluster.


RMOUG 2012 – Hello Denver!

Posted by Riyaj Shamsudeen on January 10, 2012

On February 14-16, I’ll be at the Colorado Convention Center in Denver, Colorado for RMOUG’s Training Days Conference. This is the largest regional Oracle User Conference in North America and attracts presenters from all around the country and the globe. I’ll be presenting:

Presentation Name: Troubleshooting RAC Background Process

Abstract: RAC background process performance is critical to maintaining application performance. This session will demo techniques to review the performance of RAC background processes such as LMS, LMD, LMON, etc. using various statistics and UNIX tools. The presentation will also discuss why certain background processes must run at a higher priority to maintain application performance in RAC.

Presentation Name: A Kind and Gentle Introduction to RAC

Abstract: This session will introduce basic concepts such as cache fusion, conversion to RAC, interconnect protocols, a general architectural overview, GES layer locks, clusterware, etc. The session will also discuss the srvctl command and demo a few of these commands to improve understanding.

Presentation Name: Parallel Execution in RAC

Abstract: This presentation will discuss and demo parallel server allocation and intra- and inter-node parallelism. The session will discuss new parallelism features such as parallel statement queuing and Auto DOP, and the interaction of those features with RAC. The session will probe a few critical parameters to improve PQ performance in RAC.

Click here for more information or to register for RMOUG’s Training Days.


RAC object remastering ( Dynamic remastering )

Posted by Riyaj Shamsudeen on March 25, 2010

In RAC, every data block is mastered by an instance. Mastering a block simply means that the master instance keeps track of the state of the block until the next reconfiguration event (due to instance restart or otherwise).

Hash to the master

These data blocks are mastered in block ranges. For example, the range of blocks starting from file 10, block 1 through block 128 may be mastered by instance 1, blocks from file 10, block 129 through block 256 by instance 2, etc. Of course, there are differences between versions 10g, 11g, etc., but the idea is that block ranges are uniformly mastered across instances so that global cache grants are evenly distributed among them. Interestingly, the length of the block range is 128 from 10g onwards (Julian Dyke mentioned that it is 1089 in 9i, but I have not personally tested it). Of course, Support recommends you unset db_file_multiblock_read_count, which will then be auto-adjusted to 128, meaning a full block range can be read with fewer GC messages, I suppose. I digress.
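
To make the bucketing concrete, here is a trivial sketch (assuming the 128-block range length just mentioned) that computes which range a given block falls into:

REM A sketch, assuming 128-block ranges: block 6375 of a file falls into
REM range 49, which spans blocks 6272 through 6399.
select trunc(6375/128)             range_no,
       trunc(6375/128) * 128       range_start,
       trunc(6375/128) * 128 + 127 range_end
from dual;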

Further, Michael Möller pointed out that this hash algorithm is further optimized: the hash algorithm used when initially computing the master node from the DBA results in a "virtual master", which is then translated to a real (online & open) master by a lookup table (the length of which is the maximum number of possible nodes, 128?). This means that when one node goes off/on-line, RAC does NOT have to recalculate the hash for all blocks, but only distribute the new hash-to-node table. (One can later visualize dynamic remastering as an additional lookup table between the hash value and node. This table also needs redistributing on node changes.)

The following SQL is helpful in showing masters and owners of a block. This SQL joins x$kjbl with x$le to retrieve the resource name. If you are familiar with Oracle locking strategy, you will probably recognize the format of these cache fusion (aka old PCM) locks. The lock type here is BL, id1 is the block# and id2 is the file_id. Column kjblname2 provides the lock resource in decimal format.

Please observe the output below:

  1. Block range: File 1, blocks 6360-6376 are mastered by node 3 and also owned by node 3.
  2. Block range: File 1, blocks up to 10709 are mastered by instance 0 and owned by instance 3.
  3. The next block range, from 10770, is mastered by instance 2 and owned by instance 3.

Also, note that this output is generated from a database with no remastering done yet.

REM In kjblname2, the first entry before ',' is the block# and the second entry is file_id*65536 for BL locks

select kj.*, le.le_Addr from (
select kjblname, kjblname2, kjblowner, kjblmaster, kjbllockp,
substr ( kjblname2,  instr(kjblname2,',')+1,   instr(kjblname2,',',1,2)-instr(kjblname2,',',1,1)-1)/65536 fl,
substr ( kjblname2, 1, instr(kjblname2,',')-1) blk
 from x$kjbl
) kj, x$le le
where le.le_kjbl = kj.kjbllockp
 order by le.le_addr
/

KJBLNAME               KJBLNAME2       KJBLOWNER KJBLMASTER   FL    BLK LE_ADDR
---------------------- --------------- ---------- ----------- --- ----- ----------------
[0x18d8][0x10000],[BL] 6360,65536,BL           3          3     1  6360 000000038DF9AB08
...
[0x18e7][0x10000],[BL] 6375,65536,BL           3          3     1  6375 000000038DFBF818 => case #2
[0x18e8][0x10000],[BL] 6376,65536,BL           3          3     1  6376 000000038DFD3BA0
...
[0x29d1][0x10000],[BL] 10705,65536,BL          3          0     1 10705 00000005D6FE9230
...
[0x29d5][0x10000],[BL] 10709,65536,BL          3          0     1 10709 000000038EFBB990
[0x2a12][0x10000],[BL] 10770,65536,BL          3          2     1 10770 000000075FFE3C18
...
[0x2a18][0x10000],[BL] 10776,65536,BL          2          2     1 10776 000000038EFA8488 => case #1 
[0x2a19][0x10000],[BL] 10777,65536,BL          3          2     1 10777 000000038EFB7F90 
[0x2a1a][0x10000],[BL] 10778,65536,BL          1          2     1 10778 000000038EFCC318 

Let’s consider three simple cases of a SELECT statement running on instance 3:

  1. A session is trying to access the block file 1, block 10776, but that block is mastered by instance 2 and also owned by instance 2 (meaning, it is in the instance 2 cache). So, instance 3 will send a PR (Protected Read) mode BL lock request on that block to instance 2. Ignoring additional complexities, instance 2 will grant a PR mode lock to instance 3 and transfer the block to instance 3. Obviously, this involves multiple GC messages, grants, and a block transfer. The statistic ‘gc remote grants’ is incremented too.
  2. Let’s consider that a session is trying to access another block: file 1, block 6375. That block is mastered by instance 3 and also owned by instance 3. At this point, no additional GCS/GES processing is needed; the session pins that buffer and continues its work.
  3. Let’s consider a third case. A session is trying to access file 1, block 6374. That block is not in any buffer cache, but instance 3 is the master of the block, so local affinity locks are acquired with minimal GC messages and waits. That block is read from disk into the buffer cache.

In cases #2 and #3 above, the requesting instance is also the master node of the block or block range. In these cases, the statistic ‘gc local grants’ is incremented and cheaper local affinity locks on those block ranges are acquired, avoiding many global cache messages.
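
The instance-wide grant mix can be checked from v$sysstat using the two statistic names mentioned above (a quick sketch):

REM Mix of local vs. remote grants on this instance.
select name, value
from v$sysstat
where name in ('gc local grants', 'gc remote grants');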

So far so good, but what if, say, instance 1 is reading one object (table, index, etc.) aggressively, but the other instances are not reading that object at all [through some sort of application node partitioning or just plain workload]? Does it make sense for the instance accessing that object aggressively to request a grant from the remote instance(s) for each OPEN on that object’s blocks? Especially if the blocks are read into the buffer cache but disappear from it soon after? Wouldn’t it be better if the instance reading that object aggressively were also the master of that object, as in cases #2 and #3 above?

In addition, if the block is to be thrown away from the buffer cache (close of the BL lock) or if the block needs to be written, then that will involve additional overhead/messaging between the master instance and the owner instance, since the ownership needs to be communicated back to the master of the block.

Enter Object remastering.

Object Remastering

There are many new features in the 10g/11g RAC stack. One of them is the object remastering feature. This feature was implemented in 10gR1, improved in 10gR2, and further enhanced in 11g. I realize there were parameters in 9i also, but I don’t think it worked as intended.

With the object remastering feature, if an object is accessed aggressively by one instance, then that instance will become the master of the object, reducing gc remote grants and improving application performance. In the prior sentence I used the word “accessed”, but that is a loose term; more precisely, if an instance is requesting many BL locks on an object, then that object can be remastered. In an ideal world, even if the application is not partitioned, objects accessed aggressively from one instance will be remastered, cheaper local instance affinity locks will be acquired, and the effective RAC tax will be minimal.

Well, I said, in an ideal world :-) There are a few issues here:

  1. Instances do not remember prior mastership across restarts. This means that an instance needs to re-learn the object mastership map after every restart. I can see the complexities of remembering the mastership, but it should be possible to implement.
  2. Remastering is not exactly cheap. The instance GRD is frozen during reconfiguration, and in very busy instances this can take many seconds, leading to an instance freeze for several seconds. While 10gR2 introduced parallel reconfiguration (the _rcfg_parallel_replay parameter controls this behavior) using all LMS processes to complete the reconfiguration, several seconds of freeze is still not exactly acceptable in many environments.
  3. I advise my clients to keep the number of LMS processes low (3 to 5 at most), but the effective parallelism of instance reconfiguration is reduced if we reduce the number of LMS processes.
  4. Last, but not least, the default values of a few parameters that trigger remastering events are quite low for busy environments, causing frequent remastering of objects. In an E-Business world, minor mismanagement in the manager configuration can lead to massive reconfiguration issues.

Parameters, views and internals

A few parameters control this behavior. They are not well documented, and my test case results are not very precise either, but these parameters give us a picture of what is going on internally. These parameters are applicable to 10gR2 and below. For 11g, a whole different set of parameters comes into play; I will blog about the differences in another entry.

X$object_affinity_statistics maintains statistics about objects and OPENs on those objects. It is important to understand the difference between an OPEN and a buffer access. If the block is already in the cache in a suitable mode, there is no need for a BL open on that block. So, if sessions are accessing the same block repeatedly without requesting any additional BL locks, then the count is not incremented. In short, OPENs are simply the number of BL requests initiated in an ephemeral time frame.
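
So, a quick way to spot potential remastering candidates on the local instance is to look for the objects with the most OPENs (a sketch, run as SYS; columns as seen in the test case later in this post):

REM Objects accumulating the most BL opens on this instance.
select object, node, opens
from x$object_affinity_statistics
order by opens desc;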

The LCK0 process maintains these object affinity statistics. If an instance opens 50 more OPENs on an object than the other instance (controlled by the _gc_affinity_limit parameter), then that object is a candidate for remastering. That object is queued; LMD0 reads the queue and initiates a GRD freeze. LMON performs the reconfiguration of buffer cache locks, working with the LMS processes. All of this is visible in the LMD0/LMON trace files. The parameter _gc_affinity_time controls how often the queue is checked to see whether remastering must be triggered, with a default value of 10 minutes.

Now, you don’t want just any object to be a candidate for remastering; meaning, if instance 1 opened 101 BL locks on an object and instance 2 opened 50 BL locks on it, you don’t want to trigger object remastering. Only objects with a higher rate of BL lock requests should be queued for remastering. That threshold seems to be controlled by another parameter, _gc_affinity_minimum, defined as the “minimum amount of dynamic affinity activity per minute” for an object to be a candidate for remastering. It defaults to 2500, which I think is low for a busy environment.
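
To check what these underscore parameters are currently set to, the usual x$ksppi/x$ksppcv join works (a sketch; run as SYS):

REM Current values of the DRM-related underscore parameters.
select i.ksppinm parameter, v.ksppstvl value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm in ('_gc_affinity_limit', '_gc_affinity_time', '_gc_affinity_minimum');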

A few lines from an LMD0 trace file showing that LMD0 is reading a request queue:

* kjdrchkdrm: found an RM request in the request queue 
  Transfer pkey 6589904 to node 3
*** 2009-10-12 11:41:20.257

How bad can it get?

Performance can suffer if there are remastering issues. The following AWR report shows that a few instances froze due to a DRM reconfiguration issue. The same type of freeze is visible in all other nodes too. gc buffer busy is a side effect of the DRM freeze (not always, but in this case).

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
gc buffer busy                    1,826,073     152,415     83   62.0    Cluster
CPU time                                         30,192          12.3
enq: TX - index contention           34,332      15,535    453    6.3 Concurrenc
gcs drm freeze in enter server       22,789      11,279    495    4.6      Other
enq: TX - row lock contention        46,926       4,493     96    1.8 Applicatio

At the same time, a storm of DRM operations started. This led to repetitive DRM freezes and instance reconfiguration, leading to massive performance issues.

* received DRM start msg from 2 (cnt 5, last 1, rmno 404) 
Rcvd DRM(404) Transfer pkey 1598477 from 3 to 2 oscan 1.1
Rcvd DRM(404) Dissolve pkey 6100030 from 2 oscan 0.1
Rcvd DRM(404) Dissolve pkey 5679943 from 2 oscan 0.1
Rcvd DRM(404) Transfer pkey 6561036 from 0 to 2 oscan 1.1
Rcvd DRM(404) Transfer pkey 5095243 to 2 oscan 0.1
...

A small test case

Let’s walk through a test case that shows DRM in play. The query uses an index access path to read nearly all blocks from a big index.

Session #1: 
select data_object_id from dba_objects where object_name='WMS_N1';
DATA_OBJECT_ID
-------------
6099472

REM No affinity statistics yet.
select * from x$object_affinity_statistics where object=6099472;
no rows selected.
REM executing a costly select statement
select /*+ index(a WMS_N1) */ count(*) from big_table1 a;

Session #2: I was monitoring the DRM tables:
REM DRM operations completed so far: 409. We will keep track of this count to see the remastering count increasing. There are a few
REM other interesting fields in this view.
select drms from X$KJDRMAFNSTATS;
DRM
----
409
REM I see 23442 opens on that index already since session #1 started running.
select * from x$object_affinity_statistics where object=6099472;
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05BFA8         14          1    6099472          1      23442
REM No mastering has kicked in for that object.
select * from v$gcspfmaster_info where object_id=6099472;
no rows selected
REM A few seconds later, the open count increased from 23442 -> 33344.
select * from x$object_affinity_statistics where object=6099472;
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05BFA8         16          1    6099472          1      33344

REM No remastering has kicked in for that object. 
select * from v$gcspfmaster_info where object_id=6099472;
no rows selected

REM Surprisingly, while session #1 is still executing, the counter for OPENS was zeroed out even though DRM has not been triggered yet.
REM OPENS increased from 0 to 1229 since session #1 is still executing.
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05BFA8          0          1    6099472          1       1229
REM Approximately 10 minutes later, remastering kicked in..
REM # of DRMS increased from 409 to 411
select drms from X$KJDRMAFNSTATS;
DRM
----
411
REM Remastering of that index happened. Now the master is 0, which is instance 1.
select * from v$gcspfmaster_info where object_id=6099472;
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0    6099472              0           32767            1
REM Opens are still increasing but remastering has already occurred.
select * from x$object_affinity_statistics where object=6099472;
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05AF78          2          1    6099472          1      42335
FFFFFFFF7C05AF78          3          1    6099472          2          1
REM LMON trace files also indicate the transfer of that pkey. Notice that the pkey is the same as the object_id.
*** 2010-03-23 10:41:57.355
Begin DRM(411) - transfer pkey 6099472 to 0 oscan 0.0
 ftd received from node 1 (4/0.30.0)
 all ftds received
REM A few seconds later, the opens have been reset again.
select * from x$object_affinity_statistics where object=6099472;
ADDR                   INDX    INST_ID     OBJECT       NODE      OPENS
---------------- ---------- ---------- ---------- ---------- ----------
FFFFFFFF7C05BFA8          0          1    6099472          1       7437
REM The master is still instance 1.
select * from v$gcspfmaster_info where object_id=6099472;
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0    6099472              0           32767            1

Essentially, the object was remastered after excessive BL lock requests (loosely, “accesses”) on that index.

undo and affinity

Mastering of undo segments differs from non-undo segment mastering. With non-undo segments, all the blocks are mastered by a hash technique, spreading mastership of a segment among instances; only after an instance opens BL locks aggressively on a segment is that segment remastered. For undo segments, however, the instance that activates an undo segment masters the segment immediately. This makes sense, since that undo segment will, in most cases, be used by the instance that activated it. The parameter _gc_undo_affinity controls whether this dynamic undo remastering is enabled.

Since undo segments do not have real object_ids, dummy object_ids above 4294950912 are used. For example, undo segment 1 (usn=1) will have an object_id of 4294950913, usn=2 will have object_id 4294950914, etc.
[ 4294950912 = power(2,32) - power(2,14) = 0xFFFFC000 ]
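
So, translating between usn and the dummy object_id is simple arithmetic (a sketch):

REM usn 7 maps to dummy object_id 4294950912 + 7 = 4294950919, and back.
select 4294950912 + 7          undo_object_id,
       4294950919 - 4294950912 usn
from dual;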

select object_id, object_id-4294950912 usn, current_master, previous_master,
remaster_cnt from v$gcspfmaster_info where object_id > 4294950912;

 OBJECT_ID        USN CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
4294950913          1              0           32767            1
4294950914          2              0           32767            1
4294950915          3              0           32767            1
4294950916          4              0           32767            1
4294950917          5              0           32767            1
4294950918          6              0           32767            1
4294950919          7              0           32767            1
4294950920          8              0           32767            1
4294950921          9              0           32767            1
4294950922         10              0           32767            1
4294950923         11              1           32767            1
4294950924         12              1           32767            1
4294950925         13              1           32767            1
...

REM Notice that usn 0 appears in both instances. That is the system undo segment.
REM As you can see below, the first 10 undo segments are mastered by instance 1 and the next 3 are mastered by instance 2.

select inst_id, usn, gets from gv$rollstat where usn <=13
order by inst_id, usn
   INST_ID        USN       GETS
---------- ---------- ----------
         1          0       3130
         1          1     108407
         1          2      42640
         1          3      43985
         1          4      41743
         1          5     165166
         1          6      43485
         1          7     109044
         1          8      23982
         1          9      39279
         1         10      48552
         2          0       4433
         2         11     231147
         2         12      99785
         2         13      1883

I was not successful in triggering another undo segment remastering event. I created one active transaction generating 200K undo blocks in one node while another node was reading that table, and I could see enormous waits for those undo blocks. But I didn’t see any remastering events related to that undo segment. I am not sure why it did not work; maybe the conditions for undo segment remastering are different.

[ PS: I am able to manually remaster the undo segment using the lkdebug command discussed below. So, the code must be remastering undo segments automatically too, but maybe some other conditions must be met.

  1* select * from v$gcspfmaster_info where object_id=431+4294950912
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0 4294951343              0           32767            1
oradebug lkdebug -m pkey 4294951343

* kjdrchkdrm: found an RM request in the request queue
  Transfer pkey 4294951343 to node 1
*** 2010-03-24 12:47:29.011
Begin DRM(520) - transfer pkey 4294951343 to 1 oscan 0.1
 ftd received from node 0 (4/0.31.0)
 all ftds received

select * from v$gcspfmaster_info where object_id=431+4294950912
SQL> /
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0 4294951343              1               0            2
]

I am not preaching that you should modify these undocumented parameters. Far from it. Understand the parameters; if you run into wait events such as ‘gc remaster’ or ‘gcs freeze for instance reconfiguration’, understand whether the default values are too low. Work with Support and see if they can be tuned.

Manual remastering

You can manually remaster an object with an oradebug command:
oradebug lkdebug -m pkey <data_object_id>

This enqueues an object remaster request. LMD0 and LMON complete this request:
*** 2010-01-08 23:25:54.948
* received DRM start msg from 1 (cnt 1, last 1, rmno 191)
Rcvd DRM(191) Transfer pkey 6984154 from 0 to 1 oscan 0.0
 ftd received from node 1 (8/0.30.0)
 ftd received from node 0 (8/0.30.0)
 ftd received from node 3 (8/0.30.0)
Current_master starts from 0. 
 1* select * from v$gcspfmaster_info where object_id=6984154
SQL> /
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0    6984154              1               0            2
SQL> oradebug lkdebug -m pkey 6984154
Statement processed.
SQL>  select * from v$gcspfmaster_info where object_id=6984154
 /
   FILE_ID  OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- ---------- -------------- --------------- ------------
         0    6984154              2               1            3
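
Putting the pieces together, here is a minimal sketch of the whole manual remastering workflow (the pkey is the data_object_id, as noted in the test case above; WMS_N1 reused for illustration):

REM 1. Find the pkey: it is the data_object_id, not the object_id.
select data_object_id from dba_objects where object_name = 'WMS_N1';

REM 2. Check the current master (current_master counts from 0).
select current_master, previous_master, remaster_cnt
from v$gcspfmaster_info where object_id = 6099472;

REM 3. Enqueue the remaster request; LMD0 and LMON complete it.
oradebug lkdebug -m pkey 6099472

REM 4. Verify: current_master changes and remaster_cnt increments.
select current_master, previous_master, remaster_cnt
from v$gcspfmaster_info where object_id = 6099472;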

Summary

In summary, remastering is a great feature. It is a pity that we sometimes fall victim to its side effects. So, if you run into issues with remastering, don’t disable it; see first whether you can tune those parameters upwards so as to control the remastering events. If you still want to disable DRM completely, I would recommend setting _gc_affinity_limit and _gc_affinity_minimum to a much higher value, say 10 million. Setting the parameter _gc_affinity_time to 0 will completely disable DRM, but that also means you cannot manually remaster objects. Further, Arup mentioned that x$object_affinity_statistics is not maintained if DRM is disabled.
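
For reference, a sketch of what that would look like (whether a restart is needed is an assumption on my part; these are undocumented parameters, so involve Oracle Support first):

REM A sketch only -- undocumented parameters; get Oracle Support's blessing first.
REM Raise the DRM trigger thresholds high enough that remastering never fires.
alter system set "_gc_affinity_limit"   = 10000000 scope = spfile sid = '*';
alter system set "_gc_affinity_minimum" = 10000000 scope = spfile sid = '*';
REM _gc_affinity_time = 0 disables DRM completely, including manual remastering:
REM alter system set "_gc_affinity_time" = 0 scope = spfile sid = '*';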

Again, these are undocumented parameters. Before you change them, make sure that Oracle Support agrees with you.


Update 1:

From 11g onwards, affinity management was renamed policy management. For example, x$object_affinity_statistics is renamed x$object_policy_statistics. Similarly, the initialization parameters were renamed too: _gc_affinity_limit is now _gc_policy_limit, and _gc_affinity_time is now _gc_policy_time. A new view, v$policy_history, was introduced; all rows with policy_event = ‘initiate_affinity’ are associated with DRM events.
The other details in this blog still hold good, except that by default _gc_policy_limit is lowered to 1500, which would mean that, in theory, 11g may have more DRM events. YMMV.

 1* select * from  v$policy_history
   INST_ID POLICY_EVENT         DATA_OBJECT_ID TARGET_INSTANCE_NUMBER  EVENT_DATE
---------- -------------------- -------------- ----------------------  --------------------
         2 glru_on                           0                      1  10/15/2010 10:58:28
         2 glru_on                           0                      1  10/15/2010 11:21:32
         2 initiate_affinity             74809                      1  10/15/2010 13:27:44

[ Many thanks to Arup Nanda and Michael Möller (aka “M2”) for reviewing this blog entry; they contributed heavily to some of my discussion. But any mistakes in this document are solely mine. ]
This blog is available in a pdf format from Dynamic_remastering_RAC_orainternals.


Library cache lock and library cache pin waits

Posted by Riyaj Shamsudeen on June 2, 2009

I have encountered a few customer issues centered around library cache lock and library cache pin waits. Library cache lock and pin waits can hang an instance, and in a few cases a whole cluster of RAC instances can hang due to them.

Why are library cache locks needed?

Library cache locks, aka parse locks, are needed to maintain the dependency mechanism between objects and their dependent objects, such as SQL. For example, if an object definition needs to be modified, or if parse locks are to be broken, then the dependent objects must be invalidated. This dependency is maintained using library cache locks. For example, if a column is dropped from a table, then all SQL dependent on that table must be invalidated and reparsed on the next access. Library cache locks are designed to implement this tracking mechanism.
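
The stored-object side of this dependency tracking is visible in dba_dependencies (a sketch; dependent cursors are externalized separately, in v$object_dependency):

REM Stored objects that would be invalidated if table T1 were altered.
select owner, name, type
from dba_dependencies
where referenced_name = 'T1'
and referenced_type = 'TABLE';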

In a regular enqueue locking scenario, there is a resource (for example, a TM table-level lock) and sessions enqueue to lock that resource. More discussion on enqueue locking can be found in Internals of locks. Similarly, library cache locks use object handles as resource structures, and locks are taken on those resources. If a resource is not available in a compatible mode, sessions must wait for the library cache object to become available.

Why are library cache pins needed?

Library cache pins deal with the current execution of dependent objects. For example, an underlying object should not be modified while a session is executing or accessing a dependent object. So, before parse locks on a library cache object can be broken, library cache pins must be acquired in exclusive mode; only then can library cache objects be dropped. If a session is executing a stored object, the library cache pins will not be available and there will be waits for them. Typically, this happens for a long-running statement executing a stored object.

x$kgllk, x$kglpn and x$kglob

Library cache locks and pins are externalized in three x$ tables. x$kgllk externalizes all locking structures on an object. Entries in x$kglob act as resource structures. x$kglpn externalizes all library cache pins.

x$kglob.kglhdadr acts as a pointer to the resource structure; presumably, kglhdadr stands for KGL handle address. x$kgllk acts as a lock structure, and x$kgllk.kgllkhdl points to x$kglob.kglhdadr. Similarly, x$kglpn acts as a pin structure, and x$kglpn.kglpnhdl points to x$kglob.kglhdadr to pin a resource. To give an analogy with object locking scenarios: x$kglob acts as the resource structure and x$kgllk as the lock structure for library cache locks; for library cache pins, x$kglpn acts as the pin structure, pinning the resource through kglpnhdl. This might be clearer after reviewing the example below.
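
A minimal sketch of those relationships, joining the three x$ tables on the handle address (the bigger queries below build on exactly this join):

REM Resource (x$kglob) referenced by lock (x$kgllk) and pin (x$kglpn) structures.
select ob.kglnaown obj_owner, ob.kglnaobj obj_name,
       lk.kgllkmod lock_mode, lk.kgllkreq lock_req,
       pn.kglpnmod pin_mode,  pn.kglpnreq pin_req
from x$kglob ob, x$kgllk lk, x$kglpn pn
where lk.kgllkhdl = ob.kglhdadr
and pn.kglpnhdl = ob.kglhdadr;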

Test case

We will create a simple test case to produce library cache lock and pin waits:

create or replace procedure backup.test_kgllk (l_sleep in boolean , l_compile in boolean)
as
 begin
  if (l_sleep ) then
	sys.dbms_lock.sleep(60);
  elsif (l_compile )  then
  	execute immediate 'alter procedure test_kgllk compile';
  end if;
 end;
/

In the test case above, we create a procedure that accepts two boolean parameters: sleep and compile. Passing true for the first argument makes the procedure sleep for a minute; passing true for the second argument makes the procedure recompile itself.

Let’s create two sessions in the database and then execute the procedure in each, as below.

Session #1: exec test_kgllk ( true, false); -- sleep for 1 minute, no compile
Session #2: exec test_kgllk ( false, true); -- no sleep, but compile

At this point, both sessions are waiting. The following SQL can be used to print session wait details.

select
 distinct
   ses.ksusenum sid, ses.ksuseser serial#, ses.ksuudlna username,ses.ksuseunm machine,
   ob.kglnaown obj_owner, ob.kglnaobj obj_name
   ,pn.kglpncnt pin_cnt, pn.kglpnmod pin_mode, pn.kglpnreq pin_req
   , w.state, w.event, w.wait_Time, w.seconds_in_Wait
   -- lk.kglnaobj, lk.user_name, lk.kgllksnm,
   --,lk.kgllkhdl,lk.kglhdpar
   --,trim(lk.kgllkcnt) lock_cnt, lk.kgllkmod lock_mode, lk.kgllkreq lock_req,
   --,lk.kgllkpns, lk.kgllkpnc,pn.kglpnhdl
 from
  x$kglpn pn,  x$kglob ob,x$ksuse ses 
   , v$session_wait w
where pn.kglpnhdl in
(select kglpnhdl from x$kglpn where kglpnreq >0 )
and ob.kglhdadr = pn.kglpnhdl
and pn.kglpnuse = ses.addr
and w.sid = ses.indx
order by seconds_in_wait desc
/

The output of the above SQL is:

                                                                pin  pin  pin                                 wait seconds
  SID   SERIAL# USERNAME     MACHINE   OBJ_OWNER  OBJ_NAME      cnt  mode req  STATE      EVENT               time in_wait
----- --------- ------------ --------- ---------- ------------- ---- ---- ---- ---------- ------------------- ----- -------
  268     12409 SYS          orap      SYS        TEST_KGLLK    3    2    0    WAITING    PL/SQL lock timer       0       7
  313     45572 SYS          orap      SYS        TEST_KGLLK    0    0    3    WAITING    library cache pin       0       3
  313     45572 SYS          orap      SYS        TEST_KGLLK    3    2    0    WAITING    library cache pin       0       3

  1. Session 268 (session #1) is sleeping while holding a library cache pin on the test_kgllk object (more accurately, waiting on ‘PL/SQL lock timer’).
  2. Session 313 is holding a library cache pin in mode 2 and waiting for a library cache pin in mode 3.

Obviously, session 313 is waiting for session 268 to release its library cache pin. Since session 268 is executing, session 313 must not be allowed to modify the test_kgllk library cache object. That’s exactly why library cache pins are needed.

Adding another session to this mix..

Let’s add one more session, as below:

 
exec test_kgllk (false, true);

The output of the above query is:

 
                                                                                   pin  pin  pin                                            wait seconds
  SID   SERIAL# USERNAME     MACHINE              OBJ_OWNER  OBJ_NAME              cnt mode  req STATE      EVENT                           time in_wait
----- --------- ------------ -------------------- ---------- -------------------- ---- ---- ---- ---------- ------------------------------ ----- -------
  268     12409 SYS          oraperf              SYS        TEST_KGLLK              3    2    0 WAITING    PL/SQL lock timer                  0      34
  313     45572 SYS          oraperf              SYS        TEST_KGLLK              0    0    3 WAITING    library cache pin                  0      29
  313     45572 SYS          oraperf              SYS        TEST_KGLLK              3    2    0 WAITING    library cache pin                  0      29
  442      4142 SYS          oraperf              SYS        TEST_KGLLK              0    0    2 WAITING    library cache pin                  0       3

Well, no surprise there. The new session 442 is also waiting for a library cache pin. But notice the request mode for session 442: it is 2. Session 442 needs that library cache pin in share mode to start execution, but session 313 has already requested it in mode 3. A queue is building up here. Many processes can queue behind session 313 at this point, leading to a hung instance.

library cache locks..

Let’s execute the same procedure in both sessions, with the same parameters.

 Session #1: exec test_kgllk(false, true);
 Session #2: exec test_kgllk(false, true);

Rerunning the above query tells us that session 313 is waiting on itself. Eventually, this will lead to a library cache pin self-deadlock.


Library cache pin holders/waiters
---------------------------------
                                                                                   pin  pin  pin                                            wait seconds
  SID   SERIAL# USERNAME     MACHINE              OBJ_OWNER  OBJ_NAME              cnt mode  req STATE      EVENT                           time in_wait
----- --------- ------------ -------------------- ---------- -------------------- ---- ---- ---- ---------- ------------------------------ ----- -------
  313     45572 SYS          oraperf              SYS        TEST_KGLLK              0    0    3 WAITING    library cache pin                  0      26
  313     45572 SYS          oraperf              SYS        TEST_KGLLK              3    2    0 WAITING    library cache pin                  0      26

Wait, what happened to session #2? It is not visible in x$kglpn. Querying v$session_wait shows that session #2 is waiting for a library cache lock. We will run yet another query, against x$kgllk, to see library cache lock waits.

 
  Querying x$kgllk with the query below: 
select
 distinct
   ses.ksusenum sid, ses.ksuseser serial#, ses.ksuudlna username,KSUSEMNM module,
   ob.kglnaown obj_owner, ob.kglnaobj obj_name
   ,lk.kgllkcnt lck_cnt, lk.kgllkmod lock_mode, lk.kgllkreq lock_req
   , w.state, w.event, w.wait_Time, w.seconds_in_Wait
 from
  x$kgllk lk,  x$kglob ob,x$ksuse ses
  , v$session_wait w
where lk.kgllkhdl in
(select kgllkhdl from x$kgllk where kgllkreq >0 )
and ob.kglhdadr = lk.kgllkhdl
and lk.kgllkuse = ses.addr
and w.sid = ses.indx
order by seconds_in_wait desc
/

Library cache lock holders/waiters
---------------------------------
                                                                                   lock lock                                            wait seconds
  SID   SERIAL# USERNAME     MODULE     OBJ_OWNER  OBJ_NAME                LCK_CNT mode  req STATE      EVENT                           time in_wait
----- --------- ------------ ---------- ---------- -------------------- ---------- ---- ---- ---------- ------------------------------ ----- -------
  313     45572 SYS          wsqfinc1a  SYS        TEST_KGLLK                    1    1    0 WAITING    library cache pin                  0      29
  313     45572 SYS          wsqfinc1a  SYS        TEST_KGLLK                    1    3    0 WAITING    library cache pin                  0      29
  268     12409 SYS          wsqfinc1a  SYS        TEST_KGLLK                    0    0    2 WAITING    library cache lock                 0      12
  268     12409 SYS          wsqfinc1a  SYS        TEST_KGLLK                    1    1    0 WAITING    library cache lock                 0      12

Session 313 is holding the library cache lock on that object in mode 3, and session 268 is requesting a lock on that library cache object in mode 2. So, session 268 is waiting for the library cache lock while session 313 is waiting for the library cache pin (on itself). Again, the point here is that session 268 is trying to access the library cache object and needs to acquire the library cache lock in the correct mode. That library cache lock is not available, leading to a wait.

The complete script can be downloaded from my script archive.

RAC, library cache locks and pins

Things are different in RAC. Library cache locks and pins are global resources, controlled by the GES layer. So, these scripts might not work if the library cache lock and pin waits are global events. Let’s look at what happens in a RAC environment:

exec test_kgllk ( false, true); -- node 1
exec test_kgllk ( false, true); -- node 2

In node 1, only one session is visible.

Library cache pin holders/waiters
----------------------------------
                                                                                   pin  pin  pin                                            wait seconds
  SID   SERIAL# USERNAME     MACHINE              OBJ_OWNER  OBJ_NAME              cnt mode  req STATE      EVENT                           time in_wait
----- --------- ------------ -------------------- ---------- -------------------- ---- ---- ---- ---------- ------------------------------ ----- -------
  268     12409 SYS          oraperf              SYS        TEST_KGLLK              0    0    3 WAITING    library cache pin                  0      18
  268     12409 SYS          oraperf              SYS        TEST_KGLLK              3    2    0 WAITING    library cache pin                  0      18

In node 2, only the requestor of the lock is visible.

Library cache lock holders/waiters
---------------------------------
                                                                                   lock lock                                            wait seconds
  SID   SERIAL# USERNAME     MODULE     OBJ_OWNER  OBJ_NAME                LCK_CNT mode  req STATE      EVENT                           time in_wait
----- --------- ------------ ---------- ---------- -------------------- ---------- ---- ---- ---------- ------------------------------ ----- -------
  377     43558 SYS          wsqfinc2a  SYS        TEST_KGLLK                    0    0    2 WAITING    library cache lock                 0      86

Essentially, this script does not work in a RAC environment, since it accesses x$ tables directly and those are local to an instance. To understand the issue in a RAC environment, we would need gv$ views based on x$kgllk, x$kglpn, etc. But I don’t see gv$ views over these x$ tables; we are out of luck there unless we do some more coding.

Nevertheless, we can see lockers and waiters by accessing gv$ges_blocking_enqueue to understand locking in RAC.

  1  select inst_id, handle, grant_level, request_level, resource_name1, resource_name2, pid , transaction_id0, transaction_id1
  2* ,owner_node, blocked, blocker, state from gv$ges_blocking_enqueue
SQL> /

   INST_ID HANDLE           GRANT_LEV REQUEST_L RESOURCE_NAME1                 RESOURCE_NAME2                        PID
---------- ---------------- --------- --------- ------------------------------ ------------------------------ ----------
TRANSACTION_ID0 TRANSACTION_ID1 OWNER_NODE    BLOCKED    BLOCKER
--------------- --------------- ---------- ---------- ----------
STATE
----------------------------------------------------------------
         2 00000008DD779258 KJUSERNL  KJUSERPR  [0x45993b44][0x3a1b9eee],[LB]  1167670084,974888686,LB              8700
              0               0          1          1          0
OPENING

         1 00000008E8123878 KJUSEREX  KJUSEREX  [0x45993b44][0x3a1b9eee],[LB]  1167670084,974888686,LB             12741
              0               0          0          0          1
GRANTED


We can see that PID 12741 from instance 1 is holding a global library cache lock [LB]. The global resource in this case is [0x45993b44][0x3a1b9eee],[LB], which uniquely identifies a library cache object at the cluster level. The grant_level is KJUSEREX (Exclusive) and the request_level from node 2 is KJUSERPR (Protected Read). PID 8700 in node 2 is waiting for the library cache lock held by PID 12741 in node 1. Using this output and our script output, we can pinpoint which process is holding a library cache lock or pin. Library cache locks are globalized as lock types in the range [LA]-[LZ], and library cache pins as lock types in the range [NA]-[NZ].
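
To focus on just these global library cache resources, the lock type suffix in the resource name can be filtered (a sketch based on the type ranges above; the rtrim guards against trailing padding):

REM Global library cache locks ([LA]-[LZ]) and pins ([NA]-[NZ]) only.
select inst_id, resource_name1, grant_level, request_level,
       owner_node, blocked, blocker
from gv$ges_blocking_enqueue
where rtrim(resource_name1) like '%[L_]'
or rtrim(resource_name1) like '%[N_]';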

This blog can be read in a document format from
Library_cache_locks_and_library_cache_pin_waits
Update #1: Updated broken links.
Update #2: Updated verbatim after a reader comment.


DOUG presentation: Why does optimizer hate my SQL?

Posted by Riyaj Shamsudeen on April 18, 2009

I presented on the cost-based optimizer, explaining why the CBO sometimes chooses an inefficient access plan even though there is an efficient plan in the search space. This entry is to post the presentation slides; they can be downloaded from Why_optimizer_hates_my_sql


Update: Updated presentation after Greg’s comments.
Update2: Further bug fixes in presentation and script.
Update3: Updates after Randolf’s comments.


Lock table followed by DDL?

Posted by Riyaj Shamsudeen on January 8, 2009

My client’s DBA wrote a small script to drop a column from a busy table. The application is mostly OLTP and the table is modified heavily, so it is hard to get a lock on the table, even though the DDL to drop the column would be quick, completing in under a second. The following script was tried in the middle of the night so as to minimize production impact.

set serveroutput on size 100000
declare
begin
      -- lock table in exclusive mode 
      execute immediate 'lock table t1 in exclusive mode';
      dbms_output.put_line ('**** Table locked. Dropping column ****' ) ;
      -- If we lock it, drop the column
      execute immediate 'alter table t1 drop column n5 ';
exception
  when others then
     dbms_output.put_line(' No luck ');
     dbms_output.put_line(' SQL code'||sqlcode||','|| 'error '||sqlerrm );
end;
/

The logic of the above script is: a) wait up to 5 minutes to lock the table in exclusive mode, before timing out; b) if we acquire the exclusive lock on that table, drop the column.

Do you see any issues with this script?

So, did we add that column ?
Keep Reading


Cardinality feedback to resolve a Cache buffers chains latch contention issue

Posted by Riyaj Shamsudeen on January 2, 2009

Earlier, I blogged about resolving cache buffers chains latch contention, in which the root cause was excessive index access due to a nested loops join. Recently, we resolved another, similar issue.

Problem

CPU usage was very high on the production database server, in user mode. The dynamic performance view v$session_wait indicated excessive waits for latch contention. Output from the script wait_details.sql shows that many sessions were waiting for the ‘latch free’ event. Also, the address of the latch child is the same for all of them, meaning all these sessions are trying to access one latch child.


SQL> @wait_details

   SID PID     EVENT         USERNAME  STATE               WAIT_TIME   WIS P1_P2_P3_TEXT
------ ------- ------------- --------- ---------- ------------------- --------- ----- -----------------------------------
    91  24242  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   101   4884  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   116  23899  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   187  19499  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   108  23498  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 3
   194  23701  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   202  26254  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 4
   220  23274  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   227  23643  latch free    CSTMOP    WAITED KNOWN TIME           2     0 address 69476807024-number 98-tries 0
   331  26519  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   297  23934  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 3
....

We can identify the SQL causing latch contention by querying v$session_wait. From the output below, the SQL with hash_value 1509082258 is suspicious, since many sessions executing that SQL are waiting / have waited recently for the ‘latch free’ event.


select substr(w.event, 1, 28 ) event, sql_hash_value, count(*)
from v$session_wait w, v$session s, v$process p
where s.sid=w.sid
and p.addr = s.paddr
and s.username is not null
and event not like '%pipe%'
and event not like 'SQL*%'
group by substr(w.event, 1, 28), sql_hash_value
;

EVENT                          SQL_HASH_VALUE   COUNT(*)
------------------------------ -------------- ----------
enqueue                               3740270          1
enqueue                             747790152          1
enqueue                            1192921796          1
latch free                          622474477          3
latch free                         1509082258         58 <---
latch free                         1807800540          1
global cache null to x                3740270          1
global cache null to x             1473456670          1
global cache null to x             3094935671          1
db file sequential read             109444956          1
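
The next step, mapping the hot latch child to the objects it protects, usually goes through x$bh. A sketch (the raw-to-decimal address conversion assumes a 64-bit address; 69476807024 is the decimal latch address from the wait_details.sql output above):

REM Objects protected by the hot cache buffers chains latch child.
select o.object_name, bh.tch touch_count
from x$bh bh, dba_objects o
where bh.obj = o.data_object_id
and bh.hladdr = (select addr
                 from v$latch_children
                 where name = 'cache buffers chains'
                 and to_number(rawtohex(addr), 'XXXXXXXXXXXXXXXX') = 69476807024)
order by bh.tch desc;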

Mapping to object_name
Keep Reading


Log file synch tuning #2

Posted by Riyaj Shamsudeen on November 24, 2008

After reading my earlier blog about log file sync tuning, a reader sent an interesting issue, worth blogging about.

Problem

Excessive wait time for the ‘log file sync’ event while wait time for ‘log file parallel write’ is minimal. See the statspack top events below:

 

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                     % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time                                                       18,435    31.06
PL/SQL lock timer                                   4,370      12,137    20.45
log file sync                                      57,939       8,565    14.43  <-----
db file sequential read                         1,303,477       7,030    11.85
db file scattered read                            505,147       3,992     6.73

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
...
log file sync                      57,939      6,571      8,565    148      0.5
...
log file parallel write           215,112          0      1,138      5      1.9
...

The average wait time for the ‘log file sync’ event is 148ms, while it is 5ms for the ‘log file parallel write’ event. There is an oddity here: the wait time for ‘log file sync’ seems a bit excessive compared to ‘log file parallel write’. The ratio between the number of waits for ‘log file sync’ and ‘log file parallel write’ is approximately 4. In an average case, if each commit resulted in 4 log writes, I would expect an average wait time of about 20ms for ‘log file sync’. A wait time of 148ms? That needs further analysis.
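
The same sanity check can be scripted against v$system_event (a sketch; time_waited is in centiseconds, hence the factor of 10 to get milliseconds):

REM Compare observed avg 'log file sync' with writes-per-sync * avg write time.
select s.total_waits syncs, w.total_waits writes,
       round(w.total_waits / s.total_waits, 1)      writes_per_sync,
       round(10 * s.time_waited / s.total_waits, 1) avg_sync_ms,
       round(10 * w.time_waited / w.total_waits, 1) avg_write_ms
from v$system_event s, v$system_event w
where s.event = 'log file sync'
and w.event = 'log file parallel write';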

pstack, pmap and more

The database is running on a Solaris 10 server, database version 9.2.0.8. Since this is a Solaris operating environment, we can use the proc utilities to debug this further. We used a script to dump the pstack and pmap of LGWR in a loop. This wasn’t much help, but we could guess that LGWR was waiting for I/O, since there were many lines in the stack with kaio and aio wait calls. A few lines from the pstack output are printed below.

6924:	ora_lgwr_ERPPRD2
-----------------  lwp# 1 / thread# 1  --------------------
 ffffffff7ddd4838 semsys   (4, 3, ffffffff7fffd5cc, 1, ffffffff7fffd5b8)
 00000001005b2694 ksliwat (0, 3c0, 77e8a7790, 1037c0b30, 3, 0) + 274
..
-----------------  lwp# 2 / thread# 2  --------------------
 ffffffff7ddd3a34 kaio     (6, 0, ffffffff7def7f80, 10, ffffffff7def9c18, ffffffff7f700a00)
-----------------  lwp# 3 / thread# 3  --------------------
 ffffffff7ddd2f1c lwp_park (0, 0, 0)
 ffffffff7ddcc748 cond_wait_queue (1038e5a70, 1038e5a80, 0, 0, ffffffff7def9d08, 0) + 28
 ffffffff7ddccca8 cond_wait (1038e5a70, 1038e5a80, 0, 0, 1c00, 0) + 10
 ffffffff7db03a60 _aio_idle (1038e5a00, 1, 1038e5a80, 0, ffffffff7def9c18, 0) + 28
 ffffffff7db034fc _aio_send_sigev (1038e5a00, 0, 104b60, ffffffff7ddd2cc8, ffffffff7db03498, ffffffff7dc08000) + 64

dtruss

I/O statistics did not provide much clarity either. We needed to find out whether LGWR was suffering from a performance issue. To see if LGWR was suffering from any OS-related issue, we needed to trace system calls from LGWR, and the truss utility provides such a facility. Suffering from truss-related paranoia, we didn’t want to run truss against LGWR, since that can affect performance and, more importantly, database stability.

Fortunately, dtruss came in handy. dtruss is based upon the dtrace utility, and by design dtrace is safe on the Solaris SPARC platform.

Keep Reading


Performance issue: High Kernel mode CPU usage

Posted by Riyaj Shamsudeen on October 31, 2008

Recently, we resolved a performance issue with one of our esteemed clients. The problem was interesting and worth blogging about.

Problem
An application makes 300 static connections to database DB1 on a database server, say CLNTDB. The application relies on a database link, and over a period of time each session executes SQL through that database link, creating a connection in the central database PROD1. So, there are 300 connections in the PROD1 database coming from the DB1 database through database links. Performance is fine during normal operation.

The problem starts when the application is shut down. Shutting down the application in DB1 creates massive CPU consumption on the PROD1DB server. This spike in CPU usage lasts for 5-10 seconds and causes the ASM heartbeat to fail. Since PROD1 is a central database, this has a global effect on application functionality. See the presentation below for a graphical representation of this problem.

Symptoms
Looking at the symptoms in detail, it is visible that the CPU usage is in sys mode. It is not uncommon to see high CPU usage during a storm of disconnects, but this specific CPU usage is much higher and uses all CPUs in kernel mode. If this were a problem due to process destruction, it would show up on the CLNTDB server too, as there are 300 disconnects there as well. But this problem manifests only on the PROD1DB server.

mpstat output from the PROD1DB server shows the CPU usage in %sys mode. Notice the numbers below under the sys column: almost all CPUs are used in sys mode. We need to drill down further to understand why so much CPU is used in sys mode. It is also visible that no other column has abnormally high values.


CPU ..intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    554  237  651  219   87  491    0  4349    9  91   0   0
  1    911    0 2412  591  353  630    0 15210   30  63   0   7
  2    313    0  613  106  190  105    0  3562    8  90   0   2
  3    255    0  492   92  161  530    0  2914    6  92   0   2
  4     86    1    2    3    1   63    0     8    0 100   0   0
  5    662    0 1269  153  326  211    0  6753   13  77   0  10
  6    349    0  589   54  170 1534    0  3002    7  88   0   5
  7    317   81  411  105   51   61    0  2484    4  93   0   3
  8   6423 6321  697   36  148  546    0  3663    6  86   0   9
  9    363    0  639   41  154 1893    0  3214    9  85   0   6
 10    456    0  964  194  193   77    0  5797   10  81   0   8
 11    104    0   42    3    9  183    0   207    0  95   0   5
 12    195    0  279  110   31   80    0  1590    3  97   0   0
 13    449    0  844  117  158  127    0  4486    7  85   0   8

Test case
Let’s analyze this issue with a single connection to simplify the problem. We created a test user in the DB1 database, created a private database link from DB1 to PROD1, and then executed a select statement over that database link.

In db1:

create user test1 identified by test1;
grant connect, resource to test1;
conn test1/test1
create database link proddb
connect to test1 identified by test1
using 'proddb';

select * from dual@proddb;

At this point, a database link connection has been created in the PROD1 database. We query v$session in PROD1 to find the session created for the database link. SID 4306 is the session created for our test connection from the DB1 database.

select sid, serial#, LOGON_TIME,LAST_CALL_ET from v$session where
  logon_time > sysdate-(1/24)*(1/60) and machine='machine_name_here'
/
       SID    SERIAL# LOGON_TIME           LAST_CALL_ET
---------- ---------- -------------------- ------------
      4306      51273 12-SEP-2008 20:47:29            1

Associating this session with v$process, we found the UNIX process for this session. Now, we will start truss on this UNIX process (Solaris platform).

truss -p <pid> -d -o /tmp/truss.log

In DB1, we executed: exit

This disconnects the session in the PROD1 database.

Truss output
The truss output contains the activity generated for the disconnect in the PROD1 server. Reading through it, we can see that the shmdt system calls consumed the CPU time. Each shmdt call consumes approximately 24ms, from the output below (for example, 18.5295 - 18.5053 = 0.0242). We are using the -d flag, as above, to print the time spent in each call. Of course, these are system calls, so this CPU usage will be in kernel mode.


18.4630 close(10)                                       = 0
18.4807 shmdt(0x380000000)                              = 0
18.5053 shmdt(0x440000000)                              = 0
18.5295 shmdt(0x640000000)                              = 0
18.5541 shmdt(0x840000000)                              = 0
18.5784 shmdt(0xA40000000)                              = 0
18.6026 shmdt(0xC40000000)                              = 0
18.6273 shmdt(0xE40000000)                              = 0
18.6512 shmdt(0x1040000000)                             = 0
18.6752 shmdt(0x1240000000)                             = 0
18.6753 shmdt(0x1440000000)                             = 0

So, one disconnect executes 10 such system calls and consumes approximately 0.24 CPU seconds in kernel mode. shmdt calls are used to detach from a shared memory segment. Since there are 10 shared memory segments (as visible in ipcs -ma), 10 shmdt calls are executed per session disconnect.

Projecting this calculation to 300 connections, CPU consumption will be approximately 72 seconds in total. With 16 CPUs, at least 5-6 seconds will be spent in kernel mode, assuming a linear projection [but in practice, due to mutex calls and such, this may not be linear and can exceed 72 seconds]. This matches our observation of 5-10 seconds of CPU consumption in kernel mode. The first thing we need to do is reduce the number of shmdt calls. One way is to reduce the number of shared memory segments.

Shared memory segments
We thought the SHMMAX kernel parameter was the limiting factor, since the SGA size was bigger than SHMMAX. But after changing the SHMMAX parameter, restarting the server, and so on, many shared memory segments were still created.

That’s interesting! We trussed the instance startup to understand why multiple shared memory segments were being created.

A few lines from the truss output show calls to the system calls _lgrpsys and pset_bind.


4.5957 munmap(0xFFFFFD7FFDAE0000, 32768)               = 0
4.5958 lgrp_version(1, )                               = 1
4.5958 _lgrpsys(1, 0, )                                = 42
4.5958 _lgrpsys(3, 0x00000000, 0x00000000)             = 19108
4.5959 _lgrpsys(3, 0x00004AA4, 0x06399D60)             = 19108
4.5959 _lgrpsys(1, 0, )                                = 42
4.5960 pset_bind(PS_QUERY, P_LWPID, 4294967295, 0xFFFFFD7FFFDFB11C) = 0
4.5960 pset_info(PS_MYID, 0x00000000, 0xFFFFFD7FFFDFB0D4, 0x00000000) = 0
4.5961 pset_info(PS_MYID, 0x00000000, 0xFFFFFD7FFFDFB0D4, 0x061AA2B0) = 0

The _lgrpsys calls indicate that there is some form of NUMA activity going on here. pset_bind is used to bind a thread or process to a specific processor set.

Enter NUMA.

NUMA or Locality groups

Keep Reading


Import performance: Does import of a date column resort to single-row inserts, like LOB columns?

Posted by Riyaj Shamsudeen on August 3, 2008

We had a discussion about import performance in this OTN forum. During the discussion, the OP raised a doubt that import resorts to single-row inserts for tables with date columns. The buffer parameter specifies, essentially, the size of the array used for array inserts.

We know that if a table has LOB columns, the import parameter buffer is not honored and the import utility resorts to single-row inserts for those tables. But the claim here is that tables with date columns also suffer from single-row inserts. We will probe this further and validate that claim in this blog.

Let’s create a table and populate it with 300K rows.

 create table t1 (n1 number, v1 varchar2 (512), d1 date);
   
 insert into t1
 select n1, lpad(n1, 500, 'x'), sysdate
 from (select level n1 from dual connect by level <=300003);
 commit;

 REM Creating an  export file..
 host exp userid=cbqt/cbqt file=exp_t1.dmp log=exp_t1.log tables=t1

The code fragment above created a table, inserted 300,000 rows, and exported that table to an export dump file. This dump file is ready to be imported, but we need to trace the import to measure the effect of the buffer parameter. The problem is: how do we trace the import session alone, without tracing every session in the database? This can be achieved by creating a logon trigger, as below. Only sessions from the test user (username cbqt) will have trace enabled by this trigger.


REM I could potentially use the "after logon on schema" clause too, but this is part of generic code that I use.
REM Riyaj Shamsudeen - To trace a session through logon trigger
create or replace trigger
set_system_event
after logon  on database
declare
v_user dba_users.username%TYPE:=user;
sql_stmt1 varchar2(256) :='alter session set events '||chr(39)||'10046 trace name context forever, level 12'||chr(39);
begin
  if (v_user = 'CBQT') THEN
      execute immediate sql_stmt1;
  end if;
end;
/

Let’s drop the table and import with the default buffer size of 64K. Through the logon trigger, a new SQL trace file will be generated. That trace file is analyzed with the tkprof utility, as shown in the code fragment below:

drop table t1;

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log commit=Y full=Y

tkprof orcl11g_ora_3840.trc orcl11g_ora_3840.trc.out sort=execpu, fchcpu

From the tkprof output file generated, the pertinent lines are printed below. The insert statement was executed 5455 times for 300,003 rows, which works out to an average array size of about 55 rows.

 

SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1") 
VALUES
 (:1, :2, :3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   5455     15.06      20.10        108      43261     212184      300003
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     5456     15.06      20.10        108      43261     212184      300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=7 pr=0 pw=0 time=0 us)
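
The array size estimate is straight arithmetic from the Execute row above:

REM 300003 rows over 5455 executes ~= 55 rows per array insert at the 64K default.
select round(300003/5455) avg_array_size from dual;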

Let’s repeat this test case for a buffer size of 1MB.

sqlplus cbqt/cbqt <<EOF
drop table t1;
EOF

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log buffer=1048576 commit=Y full=Y

tkprof orcl11g_ora_3846.trc orcl11g_ora_3846.trc.out sort=execpu, fchcpu

Trace lines from the tkprof output file for the 1MB test case are shown below:

Keep Reading


 