Oracle database internals by Riyaj

Discussions about Oracle performance tuning, RAC, Oracle internals & E-Business Suite.

Archive for the ‘Oracle database internals’ Category

COLLABORATE 2009 presentation: 11g performance new features.

Posted by Riyaj Shamsudeen on May 6, 2009

I just presented on Oracle 11g new features specific to performance at COLLABORATE 2009 in Orlando, Florida. You can download the presentation and paper from here:

11g performance specific new features – presentation
11g performance specific new features – paper

I met a couple of new friends and many familiar faces.

Catherin Devlin introduced the sqlpython tool to me. It is a user-friendly replacement for sqlplus with many UNIX-like goodies and will be a useful addition to any command-line toolkit. You can see installation instructions for sqlpython here.

I also attended an interesting application performance panel discussion with fellow Oakie Mark W Farnheim, Mike Brown and Sandra Vucinic. We discussed a few customer issues.

One problem discussion is worth mentioning. A client clones the production database to a development database using the full Rapid Clone methodology, but the access plan for one SQL statement differs between production and development, even though everything else was exactly the same. I am hoping the client will send 10053 trace files from both databases, and I will blog about it if I learn anything. I suspect bind peeking is causing the plan difference, but I need to analyze the 10053 trace files to confirm that. Another possibility is that the Oracle software itself was not cloned and some software patches are missing in the cloned environment. Either way, trace files are needed to proceed further.

Of course, I also met a few other friends, Jeremy Schneider and Dan Norris to mention a couple.

Posted in 11g, CBO, EBS11i, Oracle database internals, Performance tuning, Presentations | Leave a Comment »

Dynamic_plan_table, x$kqlfxpl and extreme library cache latch contention

Posted by Riyaj Shamsudeen on March 13, 2009

We had an interesting latch contention issue with a client database that is worth sharing. The client complained that they were seeing increased library cache latch contention in production with no logical explanation (yet), so they wanted us to find the root cause. The performance issue had started recently, occurred intermittently, and resolved by itself. The client was planning to upgrade to 10g within four weeks, and this issue needed to be resolved to avoid delaying the upgrade.

Statspack analysis and various statistics confirmed an increase in library cache latch contention, but nothing jumped out as the problem. Of course, this is a very active application [a 5-node RAC application] with a very high number of literal SQLs, but that is nothing new; it has always been this way. There were no new dynamic SQLs, no DDLs invalidating cursors, or anything of that sort. I started watching the production instance in real time, waiting for the issue to recur. It occurred!

Latchprofx to the rescue

At that very juncture, the instance was suffering from latch contention, and it was visible that sessions were starting to queue up for latches.

I was ready with the necessary tools. Tanel Poder has written a great tool, latchprofx, to sample v$latch_holder at high frequency [actually, it uses x$ tables, but you get the idea]. In this case, high-frequency sampling is a must, since the performance problem disappears within a few minutes. I executed the latchprofx script for library cache latches with 10,000 samples, as printed below.

SQL>  @/tmp/latchprofx  sid,name % library 10000

-- LatchProfX 1.06 by Tanel Poder ( http://www.tanelpoder.com )

  SID NAME                                      Held  Held %     Held ms
------ ----------------------------------- ---------- ------- -----------
   140 library cache                             9555   95.55   25167.870
   185 library cache                              198    1.98     521.532
   245 library cache                               97     .97     255.498
   185 library cache pin                           69     .69     181.746
   245 library cache pin                           39     .39     102.726
   240 library cache                               33     .33      86.922
   240 library cache pin                            6     .06      15.804
   227 library cache                                2     .02       5.268
   227 library cache pin                            1     .01       2.634

Yikes. We can see that one session was holding a library cache latch in 95.5% of the samples. That seems a bit odd.

Of course, there are many latch children for the library cache latch. Was that session repeatedly acquiring just one latch child, or was it acquiring many different latch children? More importantly, what was that session executing?

Latchprofx to the rescue again. Latchprofx can also print the latch child address. I re-executed the latchprofx script with a modified call; the only difference between the two calls is that the first argument now includes laddr.


@/tmp/latchprofx  sid,name,laddr % library 10000

   SID NAME                                LADDR                  Held  Held %     Held ms
------ ----------------------------------- ---------------- ---------- ------- -----------
   140 library cache                       0000000990FEFA18        298    2.98     783.442
   140 library cache                       0000000990FED2B8        270    2.70     709.830
   140 library cache                       0000000990FEF298        254    2.54     667.766
   140 library cache                       0000000990FEF658        254    2.54     667.766
   140 library cache                       0000000990FEECF8        240    2.40     630.960
   140 library cache                       0000000990FF0198        236    2.36     620.444
   140 library cache                       0000000990FEE488        225    2.25     591.525
   140 library cache                       0000000990FEFCE8        223    2.23     586.267
   140 library cache                       0000000990FEEED8        222    2.22     583.638
   140 library cache                       0000000990FEF388        222    2.22     583.638
   140 library cache                       0000000990FEDEE8        220    2.20     578.380
   140 library cache                       0000000990FF00A8        216    2.16     567.864
   140 library cache                       0000000990FEE758        215    2.15     565.235
   140 library cache                       0000000990FEEFC8        212    2.12     557.348
   140 library cache                       0000000990FEE938        211    2.11     554.719
   140 library cache                       0000000990FEE668        205    2.05     538.945
   140 library cache                       0000000990FEDB28        201    2.01     528.429
....

Whoa! This session was acquiring nearly all latch children. Notice that the latch address differs in every output row, meaning that different latchprofx samples saw this session holding different latch children. In a nutshell, this session was acquiring and releasing ALL library cache latches (i.e., all children) in quasi-sequential fashion, causing a massive performance issue. There are 53 latch children for the library cache latch in this database, and the Held % column is approximately 2%, indicating that all library cache latch children were acquired and released in a quasi-uniformly distributed (roughly 2% times ~50 children), quasi-cyclical fashion! The same pattern repeats if I reduce or increase the number of samples.

Latch greedy session

Initially, I thought this was a bug. I had not seen a case where all library cache latch children were needed to execute a single SQL, or a process causing massive latch contention this way. [Okay, I confess, I have simulated that kind of issue in my test database by holding library cache latch children with oradebug kslgetl, but that is a different topic altogether.] Session 140 was executing the SQL printed below. This SQL queries the execution plan from the shared pool for a specific hash_value in this 9i database (equivalent to the dbms_xplan.display_cursor functionality in 10g).

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&hv ), 'serial' ) )
/

I turned on SQL trace for that session, and tkprof showed that the costly statement was a recursive SQL generated by the dbms_xplan package; it is printed further below.
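For reference, here is one common way to enable that trace in 9i. This is a sketch on my part, not necessarily how it was done here; it needs execute privilege on sys.dbms_system, and the serial# substitution value comes from v$session.

REM SID 140 is the latch-greedy session identified by latchprofx above.
select serial# from v$session where sid = 140;

exec sys.dbms_system.set_sql_trace_in_session(140, &serial_number, true);
REM ... let the expensive statement run, then switch the trace off:
exec sys.dbms_system.set_sql_trace_in_session(140, &serial_number, false);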

REM Removing few lines to improve readability.
SELECT /* EXEC_FROM_DBMS_XPLAN */ id, position, level , operation, options, object_name , cardinality, bytes, temp_space,
        cost, io_cost, cpu_cost ,decode(partition_start, 'ROW LOCATION',
....
    		from BACKUP.DYNAMIC_PLAN_TABLE start  with id = 0
            and  timestamp >= 
		(select max(timestamp) from BACKUP.DYNAMIC_PLAN_TABLE where id=0  and 
			statement_id = '00000009BB9B6D50_0' and nvl(statement_id, ' ') not like 'SYS_LE%')
            and  nvl(statement_id, ' ') not like 'SYS_LE%' and statement_id = '00000009BB9B6D50_0' connect by (prior id = parent_id
                      and prior nvl(statement_id, ' ') =
                                nvl(statement_id, ' ')
                      and prior timestamp <= timestamp)
                  or (prior nvl(object_name, ' ') like 'SYS_LE%'
                      and  prior nvl(object_name, ' ') =
                                 nvl(statement_id, ' ')
                      and id = 0 and prior timestamp <= timestamp)
          order siblings by id
/

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       30     53.79      55.90        423     375837          0          29
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       32     53.82      55.93        423     375837          0          29


X$KQLFXPL

55 seconds to execute this SQL? That seems a bit excessive. This is the only time-consuming recursive SQL generated by the select statement, so somehow it must be causing the latch contention. I will print only the relevant lines from the plan output.

....
|   4 |     NESTED LOOPS OUTER           |             |     1 |  4479 |    21   (5)|
|   5 |      NESTED LOOPS OUTER          |             |     1 |  4475 |    20   (5)|
|*  6 |       FIXED TABLE FULL           | X$KQLFXPL   |     1 |  4444 |            | <---- FTS
|   7 |       TABLE ACCESS BY INDEX ROWID| OBJ$        |     1 |    31 |     3  (34)|
|*  8 |        INDEX UNIQUE SCAN         | I_OBJ1      |     1 |       |     2  (50)|
|   9 |      TABLE ACCESS CLUSTER        | USER$       |     1 |     4 |     2  (50)|
|* 10 |       INDEX UNIQUE SCAN          | I_USER#     |     1 |       |            |
....

   6 - filter("P"."INST_ID"=:B1 AND "P"."KQLFXPL_HADD"<>"P"."KQLFXPL_PHAD")

Line #6 is the problem: the fixed table x$kqlfxpl is accessed with a full table scan. It looks like x$kqlfxpl cannot be accessed without holding library cache latches. Since there is a FTS on this fixed table, almost every library cache object needs to be touched, which potentially means all library cache objects must be inspected. Further, accessing this x$ table means the library cache buckets (the area controlled by _kgl_bucket_count) need to be walked, with latches acquired and released before each memory area is inspected. Makes sense!

Repeated calls to kqlfxp, kglic and kglic0 were also visible in the stack output. These calls suggest that library cache latches were being acquired and released, and systemstate dumps confirm that too.

I was also able to reproduce this issue by querying x$kqlfxpl in a cloned database.

SQL>  select /*+ full(a) */  count(*) from x$kqlfxpl a;

  COUNT(*)
----------
     48926

-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |     1 |       |    18   (6)|
|   1 |  SORT AGGREGATE      |             |     1 |       |            |
|   2 |   FIXED TABLE FULL   | X$KQLFXPL   |   100 |       |            |
-------------------------------------------------------------------------

Running the latchprofx script concurrently from another session shows that accessing this x$ table results in excessive library cache latching activity. The Rows estimate column for this table shows 100, which is the CBO's default cardinality estimate when there are no statistics on the table.


@/tmp/latchprofx  sid,name,laddr % library 10000

   SID NAME                                LADDR                  Held  Held %     Held ms
------ ----------------------------------- ---------------- ---------- ------- -----------
   127 library cache                       0000000990FED2B8         26     .26      68.120
   127 library cache                       0000000990FEFEC8         24     .24      62.880
   127 library cache                       0000000990FEF658         23     .23      60.260
   127 library cache                       0000000990FEE668         22     .22      57.640
   127 library cache                       0000000990FEFA18         22     .22      57.640
...

Background

Now, a little background is necessary. What started all this? As part of 10g upgrade planning, the client DBAs had set up a job to capture costly SQLs and store their execution plans from the shared pool using a dynamic_plan_table script, borrowed from Tom Kyte's dynamic_plan_table discussion. Excellent idea and wonderfully written SQL, Tom.

If you don't already know it, Tom's method creates the following view. The client script passes the hash_value of each costly SQL from v$sql to this view.

create or replace view dynamic_plan_table
    as
    select
     rawtohex(address) || '_' || child_number statement_id,
     sysdate timestamp, operation, options, object_node,
     object_owner, object_name, 0 object_instance,
     optimizer,  search_columns, id, parent_id, position,
     cost, cardinality, bytes, other_tag, partition_start,
     partition_stop, partition_id, other, distribution,
   cpu_cost, io_cost, temp_space, access_predicates,
   filter_predicates
    from v$sql_plan
;

REM there is a wrapper script calling following SQL, in a loop, for top 20 SQL statements at a stretch. 
REM Of course, there is more logic in that script.

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&hv ), 'serial' ) )
/

Unfortunately, this otherwise awesome method has side effects in version 9.2.0.8, as explained above.

At this point, I was almost sure that we needed to collect statistics on fixed tables to resolve this issue. There is a fixed index on x$kqlfxpl (KQLFXPL_HADD, KQLFXPL_PHAD, KQLFXPL_HASH), and if that index is used, the effect of this problem should disappear. Since there was a plan to upgrade to 10g soon, the client did not want to collect fixed statistics.
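As a side note, you can see which columns of a fixed table carry a fixed index with the query below (my addition for illustration, not something from the client engagement):

select table_name, index_number, column_name, column_position
  from v$indexed_fixed_column
 where table_name = 'X$KQLFXPL'
 order by index_number, column_position;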

So, I worked around the issue. It is not an elegant solution, but it avoids the problem by recreating the view with a hard-coded hash value for every loop iteration.

create or replace view backup.dynamic_plan_table
    as
    select
     rawtohex(address) || '_' || child_number statement_id,
     sysdate timestamp, operation, options, object_node,
     object_owner, object_name, 0 object_instance,
     optimizer,  search_columns, id, parent_id, position,
     cost, cardinality, bytes, other_tag, partition_start,
     partition_stop, partition_id, other, distribution,
   cpu_cost, io_cost, temp_space, access_predicates,
   filter_predicates
    from v$sql_plan 
   where hash_value=&&hv    -- Added this line..
;

select plan_table_output from TABLE( dbms_xplan.display ( 'backup.dynamic_plan_table',
                (select rawtohex(address)||'_'||child_number x from v$sql where hash_value=&&hv ), 'serial' ) )
/

Now, the query runs in 0.03 seconds and uses the fixed index.

     51     NESTED LOOPS OUTER (cr=261 r=0 w=0 time=1327 us)
     51      NESTED LOOPS OUTER (cr=131 r=0 w=0 time=892 us)
     51       FIXED TABLE FIXED INDEX X$KQLFXPL (ind:3) (cr=0 r=0 w=0 time=496 us)
     27       TABLE ACCESS BY INDEX ROWID OBJ$ (cr=131 r=0 w=0 time=327 us)
     27        INDEX UNIQUE SCAN I_OBJ1 (cr=104 r=0 w=0 time=212 us)(object id 33)
     27      TABLE ACCESS CLUSTER USER$ (cr=130 r=0 w=0 time=359 us)
     27       INDEX UNIQUE SCAN I_USER# (cr=28 r=0 w=0 time=78 us)(object id 11)

Finally

The 10g upgrade guide recommends collecting fixed table statistics, so this issue should completely disappear in 10g.
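For reference, in 10g and later the fixed-object statistics can be gathered with a single call. This is a generic sketch and was not run on this 9i system:

REM Run as SYS or with the ANALYZE ANY DICTIONARY privilege.
exec dbms_stats.gather_fixed_objects_stats;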

Posted in CBO, Oracle database internals, Performance tuning | 11 Comments »

Systemstate dump analysis: Nocache on high intensive sequences in RAC

Posted by Riyaj Shamsudeen on March 10, 2009

Sequence values are cached in instance memory; by default, 20 values are cached. As the instance cache is transient, loss of an instance can result in loss of the cached sequence values. A permanent record of the highest value cached by any instance is kept in the SEQ$ table.

SQLs accessing a sequence within an instance will access the instance SGA. As each instance caches its own range of sequence values, it is highly likely that SQLs accessing the sequence from different instances will create gaps in the sequence values.

The common knee-jerk reaction to this issue is to set nocache or cache 1 for these sequences. In a single-instance environment, this approach backfires due to massive updates to the SEQ$ table, buffer busy waits, latch free waits, etc. In a RAC environment, the issue is magnified and can almost hang the instance. I had the privilege of working with a client to resolve one such performance issue.
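To make the trade-off concrete before getting into the client's case, here is a generic sketch (hypothetical sequence name, not the client's objects) of the problematic setting and the usual RAC-friendly alternative:

REM The knee-jerk setting: every nextval updates SEQ$ and takes a row cache lock.
create sequence order_id_seq nocache;

REM A large cache with noorder scales far better in RAC, at the cost of gaps
REM and of values not being strictly ordered across instances.
alter sequence order_id_seq cache 1000 noorder;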

Problem

The instances were hung and it was not possible to log in to the database, although many existing connections were working fine. We were lucky that one of the DBAs had an active connection to the database, so we took a systemstate dump from that session to see why there was a hang (or slowness).

The alert log was printing the following lines in this time frame, so we knew there was a problem with row cache enqueue locks.

Wed Feb 18 06:31:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=45
Wed Feb 18 08:59:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=44
Wed Feb 18 08:59:31 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=43
Wed Feb 18 09:01:00 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=46
Wed Feb 18 09:01:00 2008
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=27

systemstate dump

We took a systemstate dump with the following command.

alter session set events 'immediate trace name systemstate level 4';

Continue Reading

Posted in Oracle database internals, Performance tuning | 18 Comments »

High global cache waits on tab$ ?

Posted by Riyaj Shamsudeen on February 25, 2009

Yes, you read that correctly. It shocked me too.

I was trying to understand global cache waits for a client. Naturally, I queried the statspack tables and analyzed the data for just one day using a script. Surprisingly, tab$ came out as the top consumer of global cache waits. I was shocked by the revelation and couldn't believe it! If something doesn't make sense, look more closely, right?

Global cache waits

The database version is 9i. The statspack table stats$seg_stat is the source for this script, and that table is populated from v$segment_stats. These column values (global_cache_cu_blocks_served and global_cache_cr_blocks_served) are therefore cumulative. To find statistics for a specific period, we need to subtract the prior row's column value from the current row's, and the analytic function lag is useful for this.

In the analytic function printed below, the partitioning clause uses instance_number, startup_time, dataobj# and obj#. All rows with the same values for these columns are treated as one data partition, and the rows are ordered by snap_id within that partition. Lag pulls the value from the prior snap_id in the partition, and we subtract it from the current value to get the difference. Please refer to this paper, Performance tuning with SQL new features, for more information about analytic functions.

  ...
   global_cache_cu_blocks_served -
   lag(global_cache_cu_blocks_served,1,0) over (partition by instance_number,startup_time, dataobj#, obj#
              order by  snap_id ) global_cache_cu_blocks_served,
...
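To show that fragment in context, here is a minimal sketch of the delta query. It is my reconstruction, assuming statspack is installed under PERFSTAT and using the 9i column names from the fragment above; the author's complete script (behind the link below) has more logic than this.

select s.snap_id, s.instance_number, s.dataobj#, s.obj#,
       s.global_cache_cu_blocks_served -
         lag(s.global_cache_cu_blocks_served,1,0) over
            (partition by s.instance_number, sn.startup_time, s.dataobj#, s.obj#
             order by s.snap_id) global_cache_cu_blocks_served,
       s.global_cache_cr_blocks_served -
         lag(s.global_cache_cr_blocks_served,1,0) over
            (partition by s.instance_number, sn.startup_time, s.dataobj#, s.obj#
             order by s.snap_id) global_cache_cr_blocks_served
  from perfstat.stats$seg_stat s, perfstat.stats$snapshot sn
 where s.snap_id         = sn.snap_id
   and s.dbid            = sn.dbid
   and s.instance_number = sn.instance_number
 order by s.snap_id, s.instance_number;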

Script and output

The complete script is printed below, along with the output of running it against a database.

Continue Reading

Posted in EBS11i, Oracle database internals, Performance tuning | 6 Comments »

A stroll through shared pool heaps

Posted by Riyaj Shamsudeen on January 15, 2009

Last week, on a conference call, we were discussing increasing shared_pool_reserved_size to combat a performance issue (a bug). I had thought it was common knowledge that the shared pool reserved area is part of the shared pool; surprisingly, it is not so common.

In this post, we will discuss shared pool and shared pool reserved area internals. First, we will cover details specific to release 9i, and then discuss what changes in the later 10g/11g releases.

oradebug command

We will use the oradebug command to dump the heap at level 2. Level 2 dumps the shared pool heap to a trace file.

 oradebug setmypid
 oradebug dump heapdump 2

The above command generates a trace file; we will walk through that trace file and review various areas closely.

Parameters

This test instance has a fairly big SGA. The shared_pool_size (6GB) and shared_pool_reserved_size values are printed below.


SQL> show parameter shared_pool
shared_pool_reserved_size            big integer 629145600
shared_pool_size                     big integer 6442450944
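As a quick cross-check before walking through the heap dump (this query is my addition, not part of the original walkthrough), the reserved area can also be observed through v$shared_pool_reserved:

select free_space, used_space, requests, request_misses, request_failures
  from v$shared_pool_reserved;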

Trace file analysis
Continue Reading

Posted in Oracle database internals, Performance tuning | 8 Comments »

Log file synch tuning #2

Posted by Riyaj Shamsudeen on November 24, 2008

After reading my earlier blog about log file sync tuning, a reader sent an interesting issue that is worth blogging about.

Problem

Excessive wait time for the ‘log file sync’ event, while the wait time for ‘log file parallel write’ is minimal. See the statspack top events below.

 

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                     % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time                                                       18,435    31.06
PL/SQL lock timer                                   4,370      12,137    20.45
log file sync                                      57,939       8,565    14.43  <-----
db file sequential read                         1,303,477       7,030    11.85
db file scattered read                            505,147       3,992     6.73

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
...
log file sync                      57,939      6,571      8,565    148      0.5
...
log file parallel write           215,112          0      1,138      5      1.9
...

The average wait time for the ‘log file sync’ event is 148ms, while it is only 5ms for the ‘log file parallel write’ event. There is an oddity here: the ‘log file sync’ wait time seems excessive compared to ‘log file parallel write’. The ratio between the number of ‘log file parallel write’ and ‘log file sync’ waits is approximately 4, so in an average-case scenario, if each commit resulted in 4 log writes, I would expect an average wait time of about 20ms for ‘log file sync’. A wait time of 148ms? That needs further analysis.
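As an aside, the same two events can be compared directly on a live instance with a quick query against v$system_event; this is my addition, not taken from the reader's report:

select event, total_waits,
       round(time_waited_micro/1000)               time_waited_ms,
       round(time_waited_micro/1000/total_waits,1) avg_wait_ms
  from v$system_event
 where event in ('log file sync', 'log file parallel write');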

pstack, pmap and more

The database is running on a Solaris 10 server, version 9.2.0.8. Since this is a Solaris operating environment, we can use the proc utilities to debug this further. We used a script to dump the pstack and pmap of LGWR in a loop. This wasn't much help, but we could guess that LGWR was waiting for I/O, since there were many lines in the stack with kaio and aio wait calls. A few lines from the pstack output are printed below.

6924:	ora_lgwr_ERPPRD2
-----------------  lwp# 1 / thread# 1  --------------------
 ffffffff7ddd4838 semsys   (4, 3, ffffffff7fffd5cc, 1, ffffffff7fffd5b8)
 00000001005b2694 ksliwat (0, 3c0, 77e8a7790, 1037c0b30, 3, 0) + 274
..
-----------------  lwp# 2 / thread# 2  --------------------
 ffffffff7ddd3a34 kaio     (6, 0, ffffffff7def7f80, 10, ffffffff7def9c18, ffffffff7f700a00)
-----------------  lwp# 3 / thread# 3  --------------------
 ffffffff7ddd2f1c lwp_park (0, 0, 0)
 ffffffff7ddcc748 cond_wait_queue (1038e5a70, 1038e5a80, 0, 0, ffffffff7def9d08, 0) + 28
 ffffffff7ddccca8 cond_wait (1038e5a70, 1038e5a80, 0, 0, 1c00, 0) + 10
 ffffffff7db03a60 _aio_idle (1038e5a00, 1, 1038e5a80, 0, ffffffff7def9c18, 0) + 28
 ffffffff7db034fc _aio_send_sigev (1038e5a00, 0, 104b60, ffffffff7ddd2cc8, ffffffff7db03498, ffffffff7dc08000) + 64

dtruss

I/O statistics did not provide much clarity either. We needed to determine whether LGWR was suffering from any OS-related issue, which means tracing LGWR's system calls, and the truss utility provides exactly that facility. Suffering from truss-related paranoia, we did not want to run truss against LGWR, since it can affect performance and, more importantly, database stability.

Fortunately, dtruss came in handy. dtruss is based on the dtrace utility, and by design dtrace is safe on the Solaris SPARC platform.

Keep Reading

Posted in Oracle database internals, Performance tuning | 14 Comments »

Performance issue: High Kernel mode CPU usage

Posted by Riyaj Shamsudeen on October 31, 2008

Recently, we resolved a performance issue for one of our esteemed clients. The problem was interesting and worth blogging about.

Problem
An application makes 300 static connections to database DB1 on a database server, say CLNTDB. The application relies on a database link, and over a period of time each session executes SQL through that database link, creating a connection in the central database PROD1. So there are 300 connections in the PROD1 database coming from the DB1 database through database links. Performance is fine during normal operation.

The problem starts when the application is shut down. Shutting down the application in DB1 creates massive CPU consumption on the PROD1DB server. Unfortunately, this spike in CPU usage lasts for 5-10 seconds and causes the ASM heartbeat to fail. Since PROD1 is a central database, this has a global effect on application functionality. See the presentation below for a graphical representation of this problem.

Symptoms
Looking at the symptoms in detail, the CPU usage is in sys mode. It is not uncommon to see high CPU usage during a storm of disconnects, but this specific usage is much higher and consumes all CPUs in kernel mode. If this were simply a problem of process destruction, it would show up on the CLNTDB server too, as there are 300 disconnects there as well. But this problem manifests only on the PROD1DB server.

mpstat output from the PROD1DB server shows CPU usage in %sys mode. Notice the numbers below under the sys column: almost all CPUs are being used in sys mode. We need to drill down further to understand why so much CPU is consumed in sys mode. It is also visible that no other columns have abnormally high values.


CPU ..intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    554  237  651  219   87  491    0  4349    9  91   0   0
  1    911    0 2412  591  353  630    0 15210   30  63   0   7
  2    313    0  613  106  190  105    0  3562    8  90   0   2
  3    255    0  492   92  161  530    0  2914    6  92   0   2
  4     86    1    2    3    1   63    0     8    0 100   0   0
  5    662    0 1269  153  326  211    0  6753   13  77   0  10
  6    349    0  589   54  170 1534    0  3002    7  88   0   5
  7    317   81  411  105   51   61    0  2484    4  93   0   3
  8   6423 6321  697   36  148  546    0  3663    6  86   0   9
  9    363    0  639   41  154 1893    0  3214    9  85   0   6
 10    456    0  964  194  193   77    0  5797   10  81   0   8
 11    104    0   42    3    9  183    0   207    0  95   0   5
 12    195    0  279  110   31   80    0  1590    3  97   0   0
 13    449    0  844  117  158  127    0  4486    7  85   0   8

Test case
Let's simplify the problem by analyzing it for a single connection. I created a test user in the DB1 database, created a private database link from DB1 to PROD1, and then executed a select statement over that database link.

In db1:

create user test1 identified by test1;
grant connect, resource to test1;
conn test1/test1
create database link proddb
connect to test1 identified by test1
using 'proddb';

select * from dual@proddb;

At this point, a database link connection has been created in the PROD1 database. We query v$session in PROD1 to find the session created for the database link; SID 4306 is the session created for our test connection from the DB1 database.

select sid, serial#, LOGON_TIME,LAST_CALL_ET from v$session where
  logon_time > sysdate-(1/24)*(1/60) and machine='machine_name_here'
/
       SID    SERIAL# LOGON_TIME           LAST_CALL_ET
---------- ---------- -------------------- ------------
      4306      51273 12-SEP-2008 20:47:29            1

Associating this session with v$process, we found the UNIX process for this session. Now, we will start truss on that UNIX process (Solaris platform).
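The session-to-process mapping itself is a standard join; a small sketch is below, and the SPID it returns is what we pass to truss:

select p.spid, s.sid, s.serial#, s.program
  from v$session s, v$process p
 where s.paddr = p.addr
   and s.sid   = 4306;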

truss -p <spid> -d -o /tmp/truss.log

In DB1 executed: exit

This disconnects the session in the PROD1 database.

Truss output
The truss output contains the activity generated by the disconnect on the PROD1 server. Reading through it, we can see that one system call, shmdt, consumed the CPU time. Each shmdt call takes approximately 24ms, as seen in the output below (for example, 18.5295 - 18.5053 = 0.0242). The -d flag used above prints a timestamp on each line, so the delta between consecutive lines shows the time spent in each call. Of course, shmdt is a system call, so this CPU usage is in kernel mode.


18.4630 close(10)                                       = 0
18.4807 shmdt(0x380000000)                              = 0
18.5053 shmdt(0x440000000)                              = 0
18.5295 shmdt(0x640000000)                              = 0
18.5541 shmdt(0x840000000)                              = 0
18.5784 shmdt(0xA40000000)                              = 0
18.6026 shmdt(0xC40000000)                              = 0
18.6273 shmdt(0xE40000000)                              = 0
18.6512 shmdt(0x1040000000)                             = 0
18.6752 shmdt(0x1240000000)                             = 0
18.6753 shmdt(0x1440000000)                             = 0

So, one disconnect executes 10 shmdt system calls and consumes approximately 0.24 CPU seconds in kernel mode. shmdt calls detach a process from a shared memory segment; since there are 10 shared memory segments (as visible in ipcs -ma), 10 shmdt calls are executed per session disconnect.

Projecting this calculation to 300 connections, total CPU consumption will be approximately 72 seconds. With 16 CPUs, at least 5-6 wall-clock seconds will be spent in kernel mode, assuming a linear projection [in practice, due to mutex calls and such, it may not be linear and can exceed 72 seconds]. This matches our observation of 5-10 seconds of CPU consumption in kernel mode. The first thing we need to do is reduce the number of shmdt calls, and one way to do that is to reduce the number of shared memory segments.

Shared memory segments
We thought that the SHMMAX kernel parameter was the limiting factor, since the SGA size was bigger than SHMMAX. But after changing SHMMAX, restarting the server and so on, many shared memory segments were still being created.

That's interesting! So we trussed the instance startup to understand why multiple shared memory segments are created.

A few lines from the truss output show calls to the system calls _lgrpsys and pset_bind.


4.5957 munmap(0xFFFFFD7FFDAE0000, 32768)               = 0
4.5958 lgrp_version(1, )                               = 1
4.5958 _lgrpsys(1, 0, )                                = 42
4.5958 _lgrpsys(3, 0x00000000, 0x00000000)             = 19108
4.5959 _lgrpsys(3, 0x00004AA4, 0x06399D60)             = 19108
4.5959 _lgrpsys(1, 0, )                                = 42
4.5960 pset_bind(PS_QUERY, P_LWPID, 4294967295, 0xFFFFFD7FFFDFB11C) = 0
4.5960 pset_info(PS_MYID, 0x00000000, 0xFFFFFD7FFFDFB0D4, 0x00000000) = 0
4.5961 pset_info(PS_MYID, 0x00000000, 0xFFFFFD7FFFDFB0D4, 0x061AA2B0) = 0

The _lgrpsys calls indicate that some form of NUMA activity is going on here. pset_bind is used to bind a thread or process to a specific processor set.

Enter NUMA.

NUMA or Locality groups

Keep Reading

Posted in Oracle database internals, Performance tuning | 13 Comments »

How to find objects creating nologging changes?

Posted by Riyaj Shamsudeen on September 11, 2008

In an Oracle-l thread, a question was raised: How to find objects creating nologging changes?

What is a ‘nologging’ change?

The redo logging mechanism plays a critical role in media recovery. Media recovery relies on the generated archivelog files to roll the database forward, and standby database or Data Guard recovery also relies on archivelog files. It is possible to perform DML changes with minimal logging, aka nologging changes or direct-mode inserts. For example, insert /*+ append */ can be used to populate rows into a table without generating much redo. This can invalidate a standby database and might trigger rebuilding some or all parts of it.

Nologging changes generate minimal redo, since the blocks are pre-formatted and written to disk directly. A redo record is generated that invalidates the range of affected blocks. This invalidation redo record is far smaller; for example, hundreds of blocks can be invalidated with just a single redo record. Of course, recovery is severely affected, as changes performed with nologging operations can NOT be reapplied or recovered.

Internals of nologging changes

Since nologging is all about redo records, dumping a redo log file or archivelog file is a concrete way to see what happens under the hood. Let's walk through an example to explain the internals of nologging changes.

We will create a table, insert rows, and review redo records closely.

  create table t4 ( a number) nologging tablespace users;
  REM switch log file so that we will have smaller log files to dump.
  alter system switch logfile;
  REM Direct mode or nologging inserts
  insert /*+ append */ into t4 select object_id from dba_objects;
  commit;
  alter system switch logfile;

The prior online redo log file contains the nologging changes to table t4 above. We could dump that log file in its entirety, but to limit the trace file size we will dump only the layer specific to direct-mode changes. Layer 19 is the layer for nologging changes. The following script dumps the previous online redo log file for layer 19.

 set serveroutput on size 1000000
 declare
   v_sqltext varchar2(255);
 begin
   -- Find name of latest but one log file.
   select 'alter system dump logfile '||chr(39)||
                member||chr(39) || ' layer 19 '
   into v_sqltext
   from
   v$log  lg, v$logfile lgfile
   where lg.group# = lgfile.group# and
   lg.sequence# =
       (select sequence#-1 from v$log where status='CURRENT' )
   and rownum <2;
   dbms_output.put_line ('Executing :'||v_sqltext);
   --Execute above SQL to dump log file and print it to see.
   execute immediate v_sqltext;
  end;
/

alter system dump logfile 'D:\ORACLE\ORADATA\ORCL11G\REDO02.LOG' layer 19

The above command generated a trace file in the user_dump_dest directory; a few redo records from that trace file are printed below:

...
REDO RECORD - Thread:1 RBA: 0x0000b0.0000000f.00d4 LEN: 0x0034 VLD: 0x01
SCN: 0x0000.00486397 SUBSCN:  1 09/03/2008 10:54:38
CHANGE #1 INVLD AFN:4 DBA:0x01038824 BLKS:0x0001 OBJ:72852 SCN:0x0000.00486397 SEQ:  1 OP:19.2
Direct Loader invalidate block range redo entry

REDO RECORD - Thread:1 RBA: 0x0000b0.00000014.0118 LEN: 0x0034 VLD: 0x01
SCN: 0x0000.0048639a SUBSCN:  1 09/03/2008 10:54:38
CHANGE #1 INVLD AFN:4 DBA:0x01038832 BLKS:0x0001 OBJ:72852 SCN:0x0000.0048639a SEQ:  1 OP:19.2
Direct Loader invalidate block range redo entry
...

Explanation of redo records

Let’s review following two lines.

CHANGE #1 INVLD AFN:4 DBA:0x01038832 BLKS:0x0001 OBJ:72852 SCN:0x0000.0048639a SEQ:  1 OP:19.2
Direct Loader invalidate block range redo entry

OP:19.2 indicates that the layer is 19 for this change vector and 2 is the opcode. The vector also specifies BLKS:0x0001 and DBA:0x01038832. Essentially, this change vector says: invalidate the range of blocks starting at DBA 0x01038832, for 1 block. A single record of this kind can also cover a larger range of blocks.

For nologging changes, a block range invalidation redo record is generated; the blocks are formatted, populated with rows and written directly to disk. But standby databases and media recovery rely on redo records to replay changes. When the recovery code encounters the direct loader invalidate redo record above, it throws a warning to the alert log and simply marks that range of blocks as invalid. Any subsequent attempt to access those blocks throws an ORA-1578 block corruption error.

Just to reiterate, standby databases and media recovery are affected by nologging changes, but the current database is still fine and there is no corruption in the primary. If we back up the primary database again, or rebuild the affected tablespace in the standby database, we avoid the corruption. The point is that if there is a business need for nologging changes, consider taking a backup immediately after them. Of course, if there is a standby database involved, then the affected tablespace needs to be re-copied from production.

Back to our problem

We need to find the object causing the nologging changes. The field OBJ:72852 shows the object_id in decimal, so we can query dba_objects to find the object_name.

select owner, object_name from dba_objects where
object_id=72852 or data_object_id=72852
SQL> /
OWNER                          OBJECT_NAM
------------------------------ ----------
CBQT                           T4

In essence, the procedure to find the object_name causing nologging generation is (a small sketch of steps 1 and 2 follows the list):

    1. Find the archivelog containing the nologging changes. If the timestamp is known, v$archived_log can be queried to find the archivelog file name.
    2. Dump that archivelog file for layer 19: alter system dump logfile 'filename' layer 19;
    3. Locate the redo records with opcode 19.2 and note the OBJ field.
    4. Query dba_objects for that object_id.

More information about redo internals can be found in the redo internals doc.
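For example, steps 1 and 2 might look like this (the timestamps and file name are placeholders, not values from the test case above):

select name, sequence#, first_time, next_time
  from v$archived_log
 where first_time <= to_date('2008-09-03 11:00','yyyy-mm-dd hh24:mi')
   and next_time  >= to_date('2008-09-03 10:30','yyyy-mm-dd hh24:mi')
 order by sequence#;

alter system dump logfile '/arch/your_archivelog_file.arc' layer 19;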

Posted in Oracle database internals, Performance tuning, Uncategorized | 14 Comments »

Import performance: Does import of date columns resort to single-row inserts, like LOB columns?

Posted by Riyaj Shamsudeen on August 3, 2008

We had a discussion about import performance in this OTN forum thread. During the discussion, the OP suspected that import resorts to single-row inserts for tables with date columns. The buffer parameter essentially specifies the size of the array used for array inserts.

We know that if a table has LOB columns, the import buffer parameter is not honored and the import utility resorts to single-row inserts for that table. The claim here is that tables with date columns also suffer from single-row inserts. We will probe this further and test that claim in this post.

Let’s create a table and populate 300K rows.

 create table t1 (n1 number, v1 varchar2 (512), d1 date);
   
 insert into t1
 select n1, lpad(n1, 500, 'x'), sysdate
 from (select level n1 from dual connect by level <=300003);
 commit;

 REM Creating an  export file..
 host exp userid=cbqt/cbqt file=exp_t1.dmp log=exp_t1.log tables=t1

The above code fragment created a table, inserted 300,000 rows and exported the table to a dump file, which is now ready to be imported. But we need to trace the import to measure the effect of the buffer parameter. The problem is: how do we trace the import session alone, without generating trace for every session in the database? This can be achieved by creating a logon trigger as shown below. Only sessions from the test user (username CBQT) will have trace enabled by this trigger.


REM I could potentially use the "after logon on schema" clause too, but this is part of generic code that I use.
REM Riyaj Shamsudeen - To trace a session through logon trigger
create or replace trigger
set_system_event
after logon  on database
declare
v_user dba_users.username%TYPE:=user;
sql_stmt1 varchar2(256) :='alter session set events '||chr(39)||'10046 trace name context forever, level 12'||chr(39);
begin
  if (v_user = 'CBQT') THEN
      execute immediate sql_stmt1;
  end if;
end;
/

Let's drop the table and import with the default buffer size of 64K. Through the logon trigger, a new SQL trace file will be generated; that trace file is then analyzed with the tkprof utility, as shown in the code fragment below:

drop table t1;

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log commit=Y full=Y

tkprof orcl11g_ora_3840.trc orcl11g_ora_3840.trc.out sort=execpu, fchcpu

Pertinent lines from the generated tkprof output file are printed below. The insert statement was executed 5,455 times for 300,003 rows, which works out to an average array size of roughly 55 rows per execute.

 

SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1") 
VALUES
 (:1, :2, :3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   5455     15.06      20.10        108      43261     212184      300003
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     5456     15.06      20.10        108      43261     212184      300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=7 pr=0 pw=0 time=0 us)

Let’s repeat this test case for a buffer size of 1MB.

sqlplus cbqt/cbqt <<EOF
drop table t1;
EOF

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log buffer=1048576 commit=Y full=Y

tkprof orcl11g_ora_3846.trc orcl11g_ora_3846.trc.out sort=execpu, fchcpu

Trace lines from the tkprof output file for the 1MB test case are shown below:

Keep Reading

Posted in Oracle database internals, Performance tuning | 1 Comment »

Tuning latch contention: Cache buffers chain latches

Posted by Riyaj Shamsudeen on July 30, 2008

Recently, I had an opportunity to tune contention on cache buffers chains (CBC) latches. The problem statement was high CPU usage combined with poor application performance. A quick review of a 15-minute statspack report showed latch free waits as the top event, consuming approximately 3,600 seconds on an 8-CPU server. Furthermore, CPU usage was quite high, which is a typical symptom of latch contention because of the spinning involved. v$session_wait showed that hundreds of sessions were waiting on the latch free event.

SQL> @waits10g

   SID PID     EVENT         P1_P2_P3_TEXT
------ ------- ------------  --------------------------------------
   294  17189  latch free    address 15873156640-number 127-tries 0
   628  17187  latch free    address 15873156640-number 127-tries 0
....
   343  17191  latch free    address 15873156640-number 127-tries 0
   599  17199  latch: cache  address 17748373096-number 122-tries 0
               buffers chains
   337  17214  latch: cache  address 17748373096-number 122-tries 0
               buffers chains
.....
   695  17228  latch: cache  address 17748373096-number 122-tries 0
               buffers chains
....
   276  15153  latch: cache  address 19878655176-number 122-tries 1
               buffers chains

We will use a two-pronged approach to find the root cause scientifically. First, we will find the SQL suffering from latch contention and the objects associated with that SQL's access plan. Next, we will find the buffers involved in the latch contention and map them back to objects. Finally, we will match the results of these two techniques to pinpoint the root cause.

Before we go any further, let’s do a quick summary of internals of latch operations.

Brief Introduction to CBC latches and not-so-brief reason why this is a complicated topic to discuss briefly

Latches are internal memory structures that coordinate access to shared resources. Locks, aka enqueues, are different from latches. The key difference is that enqueues, as the name suggests, provide a FIFO queueing mechanism, while latches do not. On the other hand, latches are held very briefly and locks are usually held longer.

In the Oracle SGA, the buffer cache is the memory area that data blocks are read into. [If ASMM (Automatic Shared Memory Management) is in use, then part of the shared pool can be tagged as KGH: NO ACCESS and remapped to the buffer cache area too.]

Each buffer in the buffer cache has an associated element in the buffer header array, externalized as x$bh. Buffer headers keep track of various attributes and the state of buffers in the buffer cache. The buffer header array is allocated in the shared pool. The buffer headers are chained together in doubly linked lists, and each chain is linked to a hash bucket. There are many hash buckets (the number of buckets is derived and governed by the _db_block_hash_buckets parameter). Access (both inspection and change) to these hash chains is protected by cache buffers chains latches.

Further, buffer headers can be linked and delinked from hash buckets dynamically.

A simplified algorithm for accessing a buffer follows. (I deliberately cut detail out so as not to deviate too much from our primary discussion.)

  1. Hash the data block address (DBA: a combination of tablespace, file_id and block_id) to find the hash bucket.
  2. Get the latch protecting that hash bucket.
  3. If the latch is acquired, walk the hash chain, reading buffer headers to see whether a specific version of the block is already in the chain.
  4. If found, access the buffer in the buffer cache, protected by buffer pin/unpin actions.
     If not found, find a free buffer in the buffer cache, unlink its buffer header from its current chain, link that buffer header to this hash chain, release the latch and read the block into the free buffer with the buffer header pinned.
  5. If the latch was not acquired in step 2, spin up to spin_count times and go back to step 2.
  6. If the latch still could not be acquired by spinning, sleep with exponentially increasing back-off and go back to step 2.

Obviously, latches play a crucial role in controlling access to critical resources such as the hash chains. My point is that repeated access to a few buffers can increase latch activity.

There are many CBC latch children. The parameter _db_block_hash_latches controls the number of latches and is derived from the buffer cache size. Further, in Oracle 10g, sharable latches are used, and inspecting a hash chain only needs the latch in share mode, which is compatible with other share-mode operations. Note that the defaults for these undocumented parameters are usually sufficient, and changes to them must get approval from Oracle Support.
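As an aside, the structure described above suggests a direct way to map a hot CBC latch child back to the objects whose buffer headers hang off its hash chains. Here is a minimal sketch (my illustration, not the script used later in this post; it must be run as SYS because it touches x$bh):

select o.owner, o.object_name, o.object_type, bh.tch touch_count
  from v$latch_children lc, x$bh bh, dba_objects o
 where lc.name   = 'cache buffers chains'
   and lc.sleeps > 0
   and bh.hladdr = lc.addr
   and o.data_object_id = bh.obj
 order by bh.tch desc;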

Back to our problem…

Let's revisit the problem at hand. The wait output printed above shows that this latch contention involves two types of latches: latch #127 is the simulator lru latch and #122 is the cache buffers chains latch.
Keep Reading

Posted in CBO, Oracle database internals, Performance tuning | 17 Comments »

 