Oracle database internals by Riyaj

Discussions about Oracle performance tuning, RAC, Oracle internals & E-Business Suite.

Posts Tagged ‘oracle performance’

Dynamic_plan_table, x$kqlfxpl and extreme library cache latch contention

Posted by Riyaj Shamsudeen on March 13, 2009

We had an interesting latch contention issue with a client database that is worth sharing. The client complained that they were seeing increased library cache latch contention in production with no logical explanation (yet), and they wanted us to find the root cause. The performance issue had started recently, occurred intermittently, and resolved by itself. The client was planning to upgrade to 10g in a four-week time frame, so this issue needed to be resolved to avoid any delays in the 10g upgrade.

Statspack reports and various statistics showed an increase in library cache latch contention, but nothing jumped out as the problem. Of course, this is a very active application [a 5-node RAC application] with a very high number of literal SQLs, but that is nothing new; it has always been this way. No new dynamic SQLs, no DDLs invalidating cursors, or anything of that sort. I started watching the production instance in real time, waiting for the issue to recur. It occurred!

Latchprofx to the rescue

At this very juncture, the instance was suffering from latch contention and it was visible that sessions were starting to queue up for latches.

I was ready with the necessary tools. Tanel Poder has written a great tool, latchprofx, to sample v$latch_holder at high frequency [actually, it uses x$ tables, but you get the idea]. In this case, high-frequency sampling is a must, since this performance problem disappears within a few minutes. I executed the latchprofx script for library cache latches with 10,000 samples, as printed below.

SQL>  @/tmp/latchprofx  sid,name % library 10000

-- LatchProfX 1.06 by Tanel Poder ( http://www.tanelpoder.com )

  SID NAME                                      Held  Held %     Held ms
------ ----------------------------------- ---------- ------- -----------
   140 library cache                             9555   95.55   25167.870
   185 library cache                              198    1.98     521.532
   245 library cache                               97     .97     255.498
   185 library cache pin                           69     .69     181.746
   245 library cache pin                           39     .39     102.726
   240 library cache                               33     .33      86.922
   240 library cache pin                            6     .06      15.804
   227 library cache                                2     .02       5.268
   227 library cache pin                            1     .01       2.634

Yikes. We can see that one session was holding a library cache latch in 95.55% of the samples. That seems a bit odd.

Of course, there are many latch children for the library cache latch. Is that session repeatedly acquiring just one latch child, or is it acquiring many different latch children? More importantly, what is that session executing?

Latchprofx to the rescue again. Latchprofx can also print the latch children's addresses. I re-executed the latchprofx script with a modified call; the only difference between the two calls is that the first argument now includes laddr.


@/tmp/latchprofx  sid,name,laddr % library 10000

   SID NAME                                LADDR                  Held  Held %     Held ms
------ ----------------------------------- ---------------- ---------- ------- -----------
   140 library cache                       0000000990FEFA18        298    2.98     783.442
   140 library cache                       0000000990FED2B8        270    2.70     709.830
   140 library cache                       0000000990FEF298        254    2.54     667.766
   140 library cache                       0000000990FEF658        254    2.54     667.766
   140 library cache                       0000000990FEECF8        240    2.40     630.960
   140 library cache                       0000000990FF0198        236    2.36     620.444
   140 library cache                       0000000990FEE488        225    2.25     591.525
   140 library cache                       0000000990FEFCE8        223    2.23     586.267
   140 library cache                       0000000990FEEED8        222    2.22     583.638
   140 library cache                       0000000990FEF388        222    2.22     583.638
   140 library cache                       0000000990FEDEE8        220    2.20     578.380
   140 library cache                       0000000990FF00A8        216    2.16     567.864
   140 library cache                       0000000990FEE758        215    2.15     565.235
   140 library cache                       0000000990FEEFC8        212    2.12     557.348
   140 library cache                       0000000990FEE938        211    2.11     554.719
   140 library cache                       0000000990FEE668        205    2.05     538.945
   140 library cache                       0000000990FEDB28        201    2.01     528.429
....

Whoa! This session was acquiring nearly all of the latch children. Notice that the latch address is different in every row of the output, meaning that different latchprofx samples saw this session holding a different latch child. In a nutshell, this session was acquiring and releasing ALL library cache latches (a.k.a. children) in a quasi-sequential fashion and causing a massive performance issue. There are 53 latch children for the library cache latch in this database. The Held % column is approximately 2% per child, indicating that all library cache latch children were acquired and released in a quasi-uniformly distributed (2% times ~50 children), quasi-cyclical fashion! The same pattern repeats if I reduce or increase the number of samples.
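That child-latch count is easy to verify from v$latch_children; a quick sanity-check query:

REM One row per child latch; count the library cache children in this instance.
select count(*) child_latches
  from v$latch_children
 where name = 'library cache';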

Latch greedy session

Initially, I thought this was a bug. I had not seen a case where all library cache latch children were needed to execute a single SQL or process, causing massive latch contention. [Okay, I confess, I have simulated that kind of issue in my test database by holding library cache latch children with oradebug kslgetl, but that is a different topic altogether.] Session 140 was executing the SQL printed below. This SQL queries the execution plan from the shared pool for a specific hash_value in this 9i database (equivalent to dbms_xplan.display_cursor in 10g).

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&hv ), 'serial' ) )
/

I turned on sqltrace for that session, and tkprof showed that a recursive SQL generated by the dbms_xplan package was the costly SQL.

REM Removed a few lines to improve readability.
SELECT /* EXEC_FROM_DBMS_XPLAN */ id, position, level , operation, options, object_name , cardinality, bytes, temp_space,
        cost, io_cost, cpu_cost ,decode(partition_start, 'ROW LOCATION',
....
    		from BACKUP.DYNAMIC_PLAN_TABLE start  with id = 0
            and  timestamp >= 
		(select max(timestamp) from BACKUP.DYNAMIC_PLAN_TABLE where id=0  and 
			statement_id = '00000009BB9B6D50_0' and nvl(statement_id, ' ') not like 'SYS_LE%')
            and  nvl(statement_id, ' ') not like 'SYS_LE%' and statement_id = '00000009BB9B6D50_0' connect by (prior id = parent_id
                      and prior nvl(statement_id, ' ') =
                                nvl(statement_id, ' ')
                      and prior timestamp <= timestamp)
                  or (prior nvl(object_name, ' ') like 'SYS_LE%'
                      and  prior nvl(object_name, ' ') =
                                 nvl(statement_id, ' ')
                      and id = 0 and prior timestamp <= timestamp)
          order siblings by id
/

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       30     53.79      55.90        423     375837          0          29
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       32     53.82      55.93        423     375837          0          29


X$KQLFXPL

55 seconds to execute this SQL? That seems a bit excessive. This is the only time-consuming recursive SQL generated by the select statement, so somehow it must be causing the latch contention. I will print only the relevant lines from the plan output.

....
|   4 |     NESTED LOOPS OUTER           |             |     1 |  4479 |    21   (5)|
|   5 |      NESTED LOOPS OUTER          |             |     1 |  4475 |    20   (5)|
|*  6 |       FIXED TABLE FULL           | X$KQLFXPL   |     1 |  4444 |            | <---- FTS
|   7 |       TABLE ACCESS BY INDEX ROWID| OBJ$        |     1 |    31 |     3  (34)|
|*  8 |        INDEX UNIQUE SCAN         | I_OBJ1      |     1 |       |     2  (50)|
|   9 |      TABLE ACCESS CLUSTER        | USER$       |     1 |     4 |     2  (50)|
|* 10 |       INDEX UNIQUE SCAN          | I_USER#     |     1 |       |            |
....

   6 - filter("P"."INST_ID"=:B1 AND "P"."KQLFXPL_HADD"<>"P"."KQLFXPL_PHAD")

Line #6 is the problem: the fixed table x$kqlfxpl is accessed with a full table scan. It looks like x$kqlfxpl cannot be accessed without holding library cache latches. Since there is a full scan on this fixed table, potentially every library cache object must be inspected. Further, accessing this x$ table means the library cache buckets (the area controlled by _kgl_bucket_count) need to be walked, with latches acquired and released before each memory area is inspected. Makes sense!
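If you want to see for yourself why this plan touches x$kqlfxpl, the underlying fixed view definition can be pulled from v$fixed_view_definition (a quick check, run from a DBA account):

REM v$sql_plan is a synonym over gv$sql_plan, which is defined on x$kqlfxpl.
select view_name, view_definition
  from v$fixed_view_definition
 where view_name in ('GV$SQL_PLAN', 'V$SQL_PLAN');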

Repeated calls to kqlfxp, kglic, and kglic0 were also visible in the stack output. These calls suggest that library cache latches were being acquired and released; systemstate dumps confirm that too.

I was also able to reproduce this issue by querying x$kqlfxpl in a cloned database.

SQL>  select /*+ full(a) */  count(*) from x$kqlfxpl a;

  COUNT(*)
----------
     48926

-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |     1 |       |    18   (6)|
|   1 |  SORT AGGREGATE      |             |     1 |       |            |
|   2 |   FIXED TABLE FULL   | X$KQLFXPL   |   100 |       |            |
-------------------------------------------------------------------------

Running the latchprofx script concurrently from another session shows that accessing this x$ table results in excessive library cache latching activity. The Rows estimate for this table is 100, which is the CBO default cardinality estimate when there are no statistics on a table.


@/tmp/latchprofx  sid,name,laddr % library 10000

   SID NAME                                LADDR                  Held  Held %     Held ms
------ ----------------------------------- ---------------- ---------- ------- -----------
   127 library cache                       0000000990FED2B8         26     .26      68.120
   127 library cache                       0000000990FEFEC8         24     .24      62.880
   127 library cache                       0000000990FEF658         23     .23      60.260
   127 library cache                       0000000990FEE668         22     .22      57.640
   127 library cache                       0000000990FEFA18         22     .22      57.640
...

Background

Now, a little bit of background is necessary. What started all this? As part of the 10g upgrade planning, the client DBAs had set up a job to capture costly SQLs and store their execution plans from the shared pool using the dynamic_plan_table script, borrowed from Tom Kyte's dynamic_plan_table discussion. Excellent idea and wonderfully written SQL, Tom.

If you don’t know it already, Tom’s method creates the following view. The client script passes the hash_value of costly SQLs from v$sql to this view.

create or replace view dynamic_plan_table
    as
    select
     rawtohex(address) || '_' || child_number statement_id,
     sysdate timestamp, operation, options, object_node,
     object_owner, object_name, 0 object_instance,
     optimizer,  search_columns, id, parent_id, position,
     cost, cardinality, bytes, other_tag, partition_start,
     partition_stop, partition_id, other, distribution,
   cpu_cost, io_cost, temp_space, access_predicates,
   filter_predicates
    from v$sql_plan
;

REM There is a wrapper script calling the following SQL, in a loop, for the top 20 SQL statements at a stretch.
REM Of course, there is more logic in that script.

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&hv ), 'serial' ) )
/

Unfortunately, such an awesome method has side effects in version 9.2.0.8, as explained above.

At this point, I was almost sure that we needed to collect statistics on fixed tables to resolve this issue. There is a fixed index on x$kqlfxpl (KQLFXPL_HADD, KQLFXPL_PHAD, KQLFXPL_HASH), and if that index were used, the effect of this problem should disappear. Since there was a plan to upgrade to 10g soon, the client didn't want to collect fixed-object statistics.
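The indexed columns on fixed tables can be listed from v$indexed_fixed_column; a small sanity check for x$kqlfxpl (treat the exact output as version-dependent):

REM index_number groups the columns that belong to one fixed index.
select table_name, index_number, column_name, column_position
  from v$indexed_fixed_column
 where table_name = 'X$KQLFXPL'
 order by index_number, column_position;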

So, I worked around the issue. Not an elegant solution, but it avoids the problem by recreating the view with a hardcoded hash value for every loop iteration.

create or replace view backup.dynamic_plan_table
    as
    select
     rawtohex(address) || '_' || child_number statement_id,
     sysdate timestamp, operation, options, object_node,
     object_owner, object_name, 0 object_instance,
     optimizer,  search_columns, id, parent_id, position,
     cost, cardinality, bytes, other_tag, partition_start,
     partition_stop, partition_id, other, distribution,
   cpu_cost, io_cost, temp_space, access_predicates,
   filter_predicates
    from v$sql_plan 
   where hash_value=&&hv    -- Added this line..
;

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&&hv ), 'serial' ) )
/

Now it is evident that the query runs in 0.03 seconds and uses the fixed index.

     51     NESTED LOOPS OUTER (cr=261 r=0 w=0 time=1327 us)
     51      NESTED LOOPS OUTER (cr=131 r=0 w=0 time=892 us)
     51       FIXED TABLE FIXED INDEX X$KQLFXPL (ind:3) (cr=0 r=0 w=0 time=496 us)
     27       TABLE ACCESS BY INDEX ROWID OBJ$ (cr=131 r=0 w=0 time=327 us)
     27        INDEX UNIQUE SCAN I_OBJ1 (cr=104 r=0 w=0 time=212 us)(object id 33)
     27      TABLE ACCESS CLUSTER USER$ (cr=130 r=0 w=0 time=359 us)
     27       INDEX UNIQUE SCAN I_USER# (cr=28 r=0 w=0 time=78 us)(object id 11)

Finally

The 10g upgrade guide recommends collecting fixed table statistics, so this issue should completely disappear in 10g.
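For reference, on 10g the fixed object statistics can be gathered with a single documented call. A minimal sketch, run as SYS while a representative workload is loaded (not something we ran on this 9i system):

begin
  -- Gathers optimizer statistics on the x$ fixed tables; available from 10g onwards.
  dbms_stats.gather_fixed_objects_stats;
end;
/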

Posted in CBO, Oracle database internals, Performance tuning | 11 Comments »

Systemstate dump analysis: Nocache on high intensive sequences in RAC

Posted by Riyaj Shamsudeen on March 10, 2009

Sequence values are cached in instance memory; by default, 20 values are cached. As the instance cache is transient, the loss of an instance can result in the loss of cached sequence values. A permanent record of the highest possible value from any instance is kept in the SEQ$ table.

SQLs accessing this sequence within an instance will access the instance SGA. As each instance caches its own sequence values, it is highly likely that SQLs accessing this sequence from different instances will create gaps in the sequence values.

A common knee-jerk reaction to this is to set nocache or cache 1 for these sequences. In a single-instance environment, this approach will backfire due to massive updates to the SEQ$ table, buffer busy waits, latch free waits, etc. In a RAC environment, the issue is magnified and can almost hang the instance. I had the privilege of working with a client to resolve one such performance issue.
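For contrast, the usual alternative is to keep, or even increase, the cache and leave the sequence as noorder where cross-instance ordering is not required. A minimal sketch, using a made-up sequence name:

REM app_owner.order_id_seq is a hypothetical sequence, used only for illustration.
REM A larger cache means far fewer SEQ$ updates and less row cache pressure.
alter sequence app_owner.order_id_seq cache 1000 noorder;

REM Verify the setting.
select sequence_owner, sequence_name, cache_size, order_flag
  from dba_sequences
 where sequence_owner = 'APP_OWNER'
   and sequence_name  = 'ORDER_ID_SEQ';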

Problem

The instances were hung and it was not possible to log in to the database, though many existing connections were working fine. We were lucky enough that one of the DBAs had an active connection to the database, so we took a systemstate dump from that session to see why there was a hang (or slowness).

The alert log was printing the following lines in this time frame, so we knew there was a problem with row cache enqueue locks.

Wed Feb 18 06:31:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=45
Wed Feb 18 08:59:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=44
Wed Feb 18 08:59:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=43
Wed Feb 18 09:01:00 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=46
Wed Feb 18 09:01:00 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=27

systemstate dump

We took a systemstate dump with the following command.

alter session set events 'immediate trace name systemstate level 4';

Continue Reading

Posted in Oracle database internals, Performance tuning | 16 Comments »

High global cache waits on tab$ ?

Posted by Riyaj Shamsudeen on February 25, 2009

Yes, you read that correctly. It shocked me too.

I was trying to understand global cache waits for a client. Naturally, I queried the statspack tables and analyzed the data for just one day using a script. Surprisingly, tab$ came out as the top consumer of global cache waits. I was shocked by the revelation and couldn't believe it! If something doesn't make sense, look more closely, right?

Global cache waits

The database version is 9i. The statspack table stats$seg_stat is the source for this script, and that table is populated from the segment-level statistics (v$segment_statistics). These column values (global_cache_cu_blocks_served and global_cache_cr_blocks_served) are therefore cumulative: to find statistics for a specific period, we need to subtract the column value of the prior row from that of the current row. The analytic function lag is useful for this.

In the analytic function printed below, the partitioning clause uses instance_number, startup_time, dataobj#, and obj#. All rows with the same values for these columns are treated as one data partition, and the rows within that partition are ordered by snap_id. Lag pulls the row from the prior snap_id in the partition; subtracting it from the current value gives the difference. Please refer to the paper Performance tuning with SQL new features for more information about analytic functions.

  ...
   global_cache_cu_blocks_served -
   lag(global_cache_cu_blocks_served,1,0) over (partition by instance_number,startup_time, dataobj#, obj#
              order by  snap_id ) global_cache_cu_blocks_served,
...
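To make the technique concrete, here is a minimal standalone sketch of the per-snapshot delta. The join to stats$snapshot to pick up startup_time is my assumption; the complete script follows in the next section.

REM Per-snapshot deltas of the global cache "served" counters, per object and instance.
select s.snap_id, s.instance_number, s.dataobj#, s.obj#,
       s.global_cache_cr_blocks_served -
         lag(s.global_cache_cr_blocks_served,1,0) over
           (partition by s.instance_number, sn.startup_time, s.dataobj#, s.obj#
            order by s.snap_id)  gc_cr_served,
       s.global_cache_cu_blocks_served -
         lag(s.global_cache_cu_blocks_served,1,0) over
           (partition by s.instance_number, sn.startup_time, s.dataobj#, s.obj#
            order by s.snap_id)  gc_cu_served
  from stats$seg_stat s, stats$snapshot sn
 where sn.snap_id         = s.snap_id
   and sn.dbid            = s.dbid
   and sn.instance_number = s.instance_number
 order by s.snap_id, s.instance_number
/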

Script and output

The complete script is printed below, together with its output when run against a database.

Continue Reading

Posted in EBS11i, Oracle database internals, Performance tuning | 6 Comments »

A stroll through shared pool heaps

Posted by Riyaj Shamsudeen on January 15, 2009

Last week, in a conference call, we were discussing increasing shared_pool_reserved_size to combat a performance issue (bug). I thought it was common knowledge that the shared pool reserved area is part of the shared pool; surprisingly, it is not so commonly known.

In this blog, we will discuss shared_pool and shared pool reserved area internals. First we will cover details specific to release 9i, and then discuss the changes in later releases (10g/11g).

oradebug command

We will use the oradebug command to dump the heap at level 2. Level 2 dumps the shared_pool heap into a trace file.

 oradebug setmypid
 oradebug dump heapdump 2

The above command generates a trace file; we will walk through that trace file and review various areas closely.
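The dump lands in the session's trace file under the user_dump_dest directory; a quick way to locate it:

REM Trace files from foreground sessions go to user_dump_dest.
select value
  from v$parameter
 where name = 'user_dump_dest';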

Parameters

This test instance has a fairly big SGA. The shared_pool_size (6GB) and shared_pool_reserved_size (600MB) values are printed below.


SQL> show parameter shared_pool
shared_pool_reserved_size            big integer 629145600
shared_pool_size                     big integer 6442450944

Trace file analysis
Continue Reading

Posted in Oracle database internals, Performance tuning | 8 Comments »

Lock table followed by DDL?

Posted by Riyaj Shamsudeen on January 8, 2009

My client's DBA wrote a small script to drop a column from a busy table. The application is mostly OLTP and the table is modified heavily, so it is hard to get a lock on the table, even though the DDL to drop the column would complete within a sub-second. The following script was tried in the middle of the night so as to minimize production impact.

set serveroutput on size 100000
declare
begin
      -- lock table in exclusive mode 
      execute immediate 'lock table t1 in exclusive mode';
      dbms_output.put_line ('**** Table locked. Dropping column ****' ) ;
      -- If we lock it, drop the column
      execute immediate 'alter table t1 drop column n5 ';
exception
  when others then
     dbms_output.put_line(' No luck ');
     dbms_output.put_line(' SQL code'||sqlcode||','|| 'error '||sqlerrm );
end;
/

The logic of the above script is: a) wait up to 5 minutes to lock the table in exclusive mode before timing out; b) if we acquire the exclusive lock on that table, then drop the column.

Do you see any issues with this script?

So, did we drop that column?
Keep Reading

Posted in 11g, Performance tuning | 14 Comments »

Cardinality feedback to resolve a Cache buffers chains latch contention issue

Posted by Riyaj Shamsudeen on January 2, 2009

Earlier, I blogged about resolving cache buffers chains latch contention in this entry, in which the root cause was excessive index access due to a nested loops join. Recently, we resolved another, similar issue.

Problem

CPU usage was very high on the production database server, mostly in user mode. The dynamic performance view v$session_wait indicated excessive waits for latches. Output from the script wait_details.sql shows that many sessions were waiting for the 'latch free' event. Also, the latch address is the same for all of them, meaning all these sessions were trying to acquire the same latch child.


SQL> @wait_details

   SID PID     EVENT         USERNAME  STATE               WAIT_TIME   WIS P1_P2_P3_TEXT
------ ------- ------------- --------- ---------- ------------------- --------- ----- -----------------------------------
    91  24242  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   101   4884  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   116  23899  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   187  19499  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   108  23498  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 3
   194  23701  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   202  26254  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 4
   220  23274  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   227  23643  latch free    CSTMOP    WAITED KNOWN TIME           2     0 address 69476807024-number 98-tries 0
   331  26519  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 0
   297  23934  latch free    CSTMOP    WAITING                     0     0 address 69476807024-number 98-tries 3
....
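The latch address repeated in this output can also be tied back to the buffers it protects. A commonly used sketch is shown below (run as SYS; the x$bh column names are quoted from memory, so treat this as an illustration of the general technique rather than the exact script used here):

REM Map a cache buffers chains child latch address (decimal value from P1 of the
REM 'latch free' wait) to the buffers, and hence objects, hashing to that child.
select o.object_name, bh.dbarfil, bh.dbablk, bh.tch
  from x$bh bh, dba_objects o
 where bh.obj    = o.data_object_id
   and bh.hladdr = (select addr
                      from v$latch_children
                     where name = 'cache buffers chains'
                       and to_number(rawtohex(addr),'XXXXXXXXXXXXXXXX') = 69476807024)
 order by bh.tch desc
/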

We can identify the SQL causing the latch contention by querying v$session_wait. From the output below, the SQL with hash_value 1509082258 is suspicious, since many sessions executing that SQL are waiting, or waited recently, for the 'latch free' event.


select substr(w.event, 1, 28 ) event, sql_hash_value, count(*)
from v$session_wait w, v$session s, v$process p
where s.sid=w.sid
and p.addr = s.paddr
and s.username is not null
and event not like '%pipe%'
and event not like 'SQL*%'
group by substr(w.event, 1, 28), sql_hash_value
;

EVENT                          SQL_HASH_VALUE   COUNT(*)
------------------------------ -------------- ----------
enqueue                               3740270          1
enqueue                             747790152          1
enqueue                            1192921796          1
latch free                          622474477          3
latch free                         1509082258         58 <---
latch free                         1807800540          1
global cache null to x                3740270          1
global cache null to x             1473456670          1
global cache null to x             3094935671          1
db file sequential read             109444956          1

Mapping to object_name
Keep Reading

Posted in CBO, EBS11i, Performance tuning | 4 Comments »

Performance tuning: HugePages and Linux

Posted by Riyaj Shamsudeen on November 13, 2008

Recently we quickly and efficiently resolved a major performance issue for one of our New York clients. In this blog, I will discuss this performance issue and its solution.

Problem statement

The client's central database was intermittently freezing because of high CPU usage, and their business was severely affected. They had already worked with vendor support, and the problem was still unresolved.

Symptoms

Intermittent high kernel-mode CPU usage was the symptom. The server hardware was 4 dual-core CPUs with hyperthreading enabled and 20GB of RAM, running Red Hat Linux with a 2.6 kernel.

During this database freeze, all CPUs were using kernel mode and the database was almost unusable. Even log-ins and simple SQL such as SELECT * from DUAL; took a few seconds to complete. A review of the AWR report did not help much, as expected, since the problem was outside the database.

Analyzing the situation and collecting system activity reporter (sar) data, we could see that between 08:32 and 08:40 CPU usage in kernel mode climbed to almost 70%. It is also interesting to note that SADC (sar data collection) itself suffered from this CPU spike: the sar collection scheduled for 08:30 completed two minutes late, at 08:32, as shown below.

A similar issue repeated at 10:50AM:

07:20:01 AM CPU   %user     %nice   %system   %iowait     %idle
07:30:01 AM all    4.85      0.00     77.40      4.18     13.58
07:40:01 AM all   16.44      0.00      2.11     22.21     59.24
07:50:01 AM all   23.15      0.00      2.00     21.53     53.32
08:00:01 AM all   30.16      0.00      2.55     15.87     51.41
08:10:01 AM all   32.86      0.00      3.08     13.77     50.29
08:20:01 AM all   27.94      0.00      2.07     12.00     58.00
08:32:50 AM all   25.97      0.00     25.42     10.73     37.88 <--
08:40:02 AM all   16.40      0.00     69.21      4.11     10.29 <--
08:50:01 AM all   35.82      0.00      2.10     12.76     49.32
09:00:01 AM all   35.46      0.00      1.86      9.46     53.22
09:10:01 AM all   31.86      0.00      2.71     14.12     51.31
09:20:01 AM all   26.97      0.00      2.19      8.14     62.70
09:30:02 AM all   29.56      0.00      3.02     16.00     51.41
09:40:01 AM all   29.32      0.00      2.62     13.43     54.62
09:50:01 AM all   21.57      0.00      2.23     10.32     65.88
10:00:01 AM all   16.93      0.00      3.59     14.55     64.92
10:10:01 AM all   11.07      0.00     71.88      8.21      8.84
10:30:01 AM all   43.66      0.00      3.34     13.80     39.20
10:41:54 AM all   38.15      0.00     17.54     11.68     32.63 <--
10:50:01 AM all   16.05      0.00     66.59      5.38     11.98 <--
11:00:01 AM all   39.81      0.00      2.99     12.36     44.85

Performance forensic analysis

The client had access to a few tools, none of which were very effective. We knew that there was excessive kernel-mode CPU usage; to understand why, we needed to look at various metrics around 8:40 and 10:10.

Fortunately, sar data was handy. Looking at free memory, we saw something odd. At 08:32, free memory was 86MB; by 08:40 it had climbed to 1.1GB. At 10:41, free memory was 78MB; by 10:50 it had jumped to 4.7GB. So, within a span of roughly ten minutes, free memory climbed from 78MB to 4.7GB.

07:40:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached
07:50:01 AM    225968  20323044     98.90    173900   7151144
08:00:01 AM    206688  20342324     98.99    127600   7084496
08:10:01 AM    214152  20334860     98.96    109728   7055032
08:20:01 AM    209920  20339092     98.98     21268   7056184
08:32:50 AM     86176  20462836     99.58      8240   7040608
08:40:02 AM   1157520  19391492     94.37     79096   7012752
08:50:01 AM   1523808  19025204     92.58    158044   7095076
09:00:01 AM    775916  19773096     96.22    187108   7116308
09:10:01 AM    430100  20118912     97.91    218716   7129248
09:20:01 AM    159700  20389312     99.22    239460   7124080
09:30:02 AM    265184  20283828     98.71    126508   7090432
10:41:54 AM     78588  20470424     99.62      4092   6962732  <--
10:50:01 AM   4787684  15761328     76.70     77400   6878012  <--
11:00:01 AM   2636892  17912120     87.17    143780   6990176
11:10:01 AM   1471236  19077776     92.84    186540   7041712

This tells us that there is a correlation between the kernel-mode CPU usage and the increase in free memory. If free memory goes from 78MB to 4.7GB, then the paging and swapping daemons must be working very hard. Releasing 4.7GB of memory to the free pool sharply increases paging/swapping activity, which leads to a massive increase in kernel-mode CPU usage.

Most likely, many of the SGA pages were also paged out, since the SGA was not locked in memory.

Memory breakdown

The client's question was: if paging/swapping is indeed the issue, then what is using all the memory? It's a 20GB server, the SGA size is 10GB, and no other application is running. It gets a few hundred connections at a time, and pga_aggregate_target is set to 2GB. So why would the server be suffering from memory starvation? And if memory is the issue, how can there be 4.7GB of free memory at 10:50 AM?

Recent OS architectures are designed to use all available memory, so the paging daemons don't wake up until free memory falls below a certain threshold. It's possible for free memory to drop near zero and then climb quickly as the paging/swapping daemons start to work harder and harder. This explains why free memory went down to 78MB and then rose to 4.7GB ten minutes later.

What is using the memory, though? /proc/meminfo is useful for understanding that, and it shows that the page table size is 5GB. How interesting!

Keep Reading

Posted in Performance tuning | 8 Comments »

Presentations at SIOUG and DOUG

Posted by Riyaj Shamsudeen on November 6, 2008

I recently traveled to Europe to present at a few conferences. The Slovenia Oracle User Group (SIOUG) held its Oracle conference in Portoroz, a port city on the Adriatic Sea.

Thanks to my friend Joze Senegacnik, I had a wonderful time in Slovenia. We also visited the city of Venice while we were there. Of course, we all knew that Venice is the city of bridges, with over a hundred small islands connected by bridges, but visiting Venice in person had a dramatic effect on us. We were mesmerized by the beauty, culture, and architecture, in particular the Basilica of St. Mark. Venice is indeed the Queen of the Adriatic.

Back to reality: I also presented a few papers at the Dallas Oracle Users Group (DOUG) October tech meeting. All of these papers can be accessed via the following links:

1. Performance specific new features in 11g

2. Battle of the nodes: RAC Performance myths

3. Cost Based Query Transformations

4. Performance tuning: Scientific approach to bottleneck identification

Also, if you are planning to attend the UKOUG Conference & Exhibition in December, please attend my presentation on “Cost based query transformation” on Thursday of that week.

Posted in Presentations | 4 Comments »

 