Oracle database internals by Riyaj

Discussions about Oracle performance tuning, RAC, Oracle internals & E-Business Suite.

Import performance: Does import of date columns resort to single row inserts, like lob columns?

Posted by Riyaj Shamsudeen on August 3, 2008

We had a discussion about import performance in this OTN forum thread. During the discussion, the OP raised a doubt that import resorts to single row inserts for tables with date columns. Import performs array inserts, and the buffer parameter essentially specifies the size of the array used for those inserts.

We know that if a table has lob columns, the import buffer parameter is not honored and the import utility resorts to single row inserts for those tables. But the claim here is that tables with date columns also suffer from single row inserts. We will probe this further and validate that claim in this blog post.
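To make the array-insert idea concrete, here is a minimal, hypothetical PL/SQL sketch. This is not what imp does internally (imp performs its array inserts through OCI), and the demo_array table and 10,000-row count are made up purely for illustration; the point is only that an array insert needs far fewer executions for the same number of rows.

 REM Hypothetical illustration only: row-by-row inserts vs. a single bulk (array) insert.
 create table demo_array (n1 number, v1 varchar2(100));

 declare
   type num_tab is table of number index by pls_integer;
   l_nums num_tab;
 begin
   for i in 1 .. 10000 loop
     l_nums(i) := i;
   end loop;

   -- Row-by-row: 10,000 separate executions of the insert statement.
   for i in 1 .. 10000 loop
     insert into demo_array values (l_nums(i), rpad('x', 100));
   end loop;

   -- Array insert: a single execution binds all 10,000 rows at once.
   forall i in 1 .. 10000
     insert into demo_array values (l_nums(i), rpad('x', 100));

   rollback;
 end;
 /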

Let’s create a table and populate it with 300K rows.

 create table t1 (n1 number, v1 varchar2 (512), d1 date);
   
 insert into t1
 select n1, lpad(n1, 500, 'x'), sysdate
 from (select level n1 from dual connect by level <=300003);
 commit;

 REM Creating an  export file..
 host exp userid=cbqt/cbqt file=exp_t1.dmp log=exp_t1.log tables=t1

The code fragment above created a table, inserted 300,003 rows and exported that table to a dump file. This dump file is ready to be imported, but we need to trace the import to measure the effect of the buffer parameter. The problem is: how do we trace the import session alone, without generating trace files for every session in the database? This can be achieved by creating a logon trigger, as shown below. Only sessions from the test user (username CBQT) will have trace enabled by this trigger.


REM I could potentially use the "on schema" clause too, but this is part of a generic script that I use.
REM Riyaj Shamsudeen - To trace a session through logon trigger
create or replace trigger
set_system_event
after logon  on database
declare
v_user dba_users.username%TYPE:=user;
sql_stmt1 varchar2(256) :='alter session set events '||chr(39)||'10046 trace name context forever, level 12'||chr(39);
begin
  if (v_user = 'CBQT') THEN
      execute immediate sql_stmt1;
  end if;
end;
/
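Once the test runs are complete, it is a good idea to disable (or drop) this trigger so that every CBQT session does not keep generating trace files, for example:

REM Disable the logon trigger after testing (or drop it altogether).
alter trigger set_system_event disable;
REM drop trigger set_system_event;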

Let’s drop the table and import with the default buffer size of 64KB. The logon trigger will generate a new SQL trace file for the import session, and that trace file is then analyzed with the tkprof utility, as shown in the code fragment below:

drop table t1;

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log commit=Y full=Y

tkprof orcl11g_ora_3840.trc orcl11g_ora_3840.trc.out sort=execpu,fchcpu
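A side note on finding the trace file: in 11g the trace files land in the ADR trace directory, which can be located with a query like the one below (v$diag_info is 11g-specific; in earlier releases the user_dump_dest parameter points to the same kind of location).

REM Locate the directory where the orcl11g_ora_*.trc files are written (11g ADR layout).
select value from v$diag_info where name = 'Diag Trace';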

From the tkprof output file generated, the pertinent lines are printed below. The insert statement was executed 5,455 times, which works out to an average array size of about 55 rows (300,003 rows / 5,455 executions).

 

SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1") 
VALUES
 (:1, :2, :3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   5455     15.06      20.10        108      43261     212184      300003
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     5456     15.06      20.10        108      43261     212184      300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=7 pr=0 pw=0 time=0 us)

Let’s repeat this test case for a buffer size of 1MB.

sqlplus cbqt/cbqt <<EOF
drop table t1;
EOF

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log buffer=1048576 commit=Y full=Y

tkprof orcl11g_ora_3846.trc orcl11g_ora_3846.trc.out sort=execpu,fchcpu

Trace lines from the tkprof output file for the 1MB test case are shown below:

 

SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1") 
VALUES
 (:1, :2, :3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    157     10.40      19.41         76      42012     200594      300003
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      158     10.40      19.41         76      42012     200594      300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=231 pr=0 pw=0 time=0 us)

The number of executions for the insert statement went down from 5,455 to 157, and the average array size went up from about 55 rows to about 1,910 rows. Increasing the buffer size from 64KB to 1MB increased the average array size.
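These numbers line up reasonably with the documented sizing rule for the original imp BUFFER parameter, buffer_size = rows_in_array * maximum_row_size. A back-of-envelope sketch for the 1MB run, assuming a maximum row size of roughly 22 (number) + 512 (varchar2) + 7 (date) bytes and ignoring any per-row overhead in the export stream:

 REM Rough estimate of rows per array insert with a 1MB buffer; 541 bytes is an approximation.
 select trunc(1048576 / (22 + 512 + 7)) approx_rows_per_insert from dual;

This returns 1938, close to the observed average of about 1,910 rows per execution; treat it only as a rough guide, since the per-row overhead is not accounted for.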

Repeating this test with 10MB and 100MB buffer sizes shows that executions are reduced to 16 and 10 respectively. This proves that the import buffer parameter is honored for tables with date columns as well.

 

10MB:

SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1") 
VALUES
 (:1, :2, :3)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     16      9.50      17.78         96      42332     200178      300003
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       17      9.50      17.78         96      42332     200178      300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=2548 pr=0 pw=0 time=0 us)


100MB:

SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1") 
VALUES
 (:1, :2, :3)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     10      9.18      18.98         96      42117     199908      300003
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       11      9.18      18.98         96      42117     199908      300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=4269 pr=0 pw=0 time=0 us)
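From the two runs above, the average array sizes work out as follows (simple arithmetic on the numbers reported by tkprof):

300,003 rows / 16 executions ≈ 18,750 rows per execution (10MB buffer)
300,003 rows / 10 executions ≈ 30,000 rows per execution (100MB buffer)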

Let’s repeat a subset of this test case for a table with a lob column.

 REM At this point, the logon trigger is still enabled, so the setup below may run slowly with tracing on. Drop the trigger here
 REM and recreate it later, if needed.
 drop table t1;
 create table t1 (n1 number, v1 varchar2 (512), c1 clob);
   
 insert into t1
 select n1, lpad(n1, 500, 'x'), lpad(n1, 10, 'x') -- intentionally keeping the clob column small.
 from (select level n1 from dual connect by level <=300003);
 commit;
 REM Creating an  export file..
 host exp userid=cbqt/cbqt file=exp_t1.dmp log=exp_t1.log tables=t1
 drop table t1;
 imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log buffer=104857600 commit=Y full=Y
 
 tkprof orcl11g_ora_396.trc orcl11g_ora_396.trc.out sort=execpu,fchcpu
 

Lines from the tkprof output file for the test case above:

SQL ID : a92gcz9gxjuqh
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "C1") 
VALUES
 (:1, :2, :3)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 300003    185.17     193.95        159       4228    1079227      300003
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   300004    185.17     193.95        159       4228    1079227      300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=3 pr=0 pw=0 time=0 us)

It’s clear that single row inserts are used for lob columns. That’s why import performance is poor for tables with lob columns. But date columns do NOT suffer from any such performance issue.

One Response to “Import performance: Does import of date columns resort to single row inserts, like lob columns?”

  1. Hi Riyaj,

    Thank you for sharing so much information on your blog. I am learning something new every day I read your blog.

    As you probably know, in 10g you can use a simpler logon trigger which only fires when a specific schema logs on:

    create or replace trigger enable_sql_trace
    after logon on CBQT.SCHEMA
    BEGIN
    dbms_monitor.session_trace_enable(null,null,true,true);
    end;
    /

    Regards,
    Martin
