We had a discussion about import performance in an OTN forum thread. During the discussion, the OP claimed that import resorts to single-row inserts for tables with date columns. The buffer parameter specifies, essentially, the size of the array used for array inserts.
We know that if a table has LOB columns, the import parameter buffer is not honored, and the import utility resorts to single-row inserts for those tables. But the claim here is that tables with date columns also suffer from single-row inserts. We will probe this further and validate that claim in this blog.
Let’s create a table and populate it with roughly 300K rows.
create table t1 (n1 number, v1 varchar2(512), d1 date);

insert into t1
select n1, lpad(n1, 500, 'x'), sysdate
from (select level n1 from dual connect by level <= 300003);

commit;

REM Creating an export file..
host exp userid=cbqt/cbqt file=exp_t1.dmp log=exp_t1.log tables=t1
The code fragment above created a table, inserted 300,003 rows, and exported that table to an export dump file. This dump file is ready to be imported. But we need to trace the import to measure the effect of the buffer parameter. The problem is: how do we trace the import session alone, without generating trace files for every session in the database? This can be achieved by creating a logon trigger, as below. Only sessions from a test user (username CBQT) will have trace enabled by this trigger.
REM I could potentially use the "on schema" clause too, but this is part of a generic code that I use.
REM Riyaj Shamsudeen - To trace a session through a logon trigger
create or replace trigger set_system_event
after logon on database
declare
  v_user dba_users.username%TYPE := user;
  sql_stmt1 varchar2(256) := 'alter session set events ' || chr(39) ||
      '10046 trace name context forever, level 12' || chr(39);
begin
  if (v_user = 'CBQT') then
    execute immediate sql_stmt1;
  end if;
end;
/
Let’s drop the table and import it with the default buffer size of 64KB. Through the logon trigger, a new SQL trace file will be generated. That trace file is then analyzed with the tkprof utility, as shown in the code fragment below:
drop table t1;

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log commit=Y full=Y

tkprof orcl11g_ora_3840.trc orcl11g_ora_3840.trc.out sort=execpu,fchcpu
The pertinent lines from the generated tkprof output file are printed below. The insert statement was executed 5,455 times for 300,003 rows, which works out to an average array size of about 55 rows.
SQL ID : c9nv9yq6w2ydp
INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("N1", "V1", "D1")
VALUES (:1, :2, :3)

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute   5455     15.06      20.10        108      43261     212184     300003
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total     5456     15.06      20.10        108      43261     212184     300003

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 88

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=7 pr=0 pw=0 time=0 us)
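As a quick arithmetic check, the average array size follows directly from the counts in the tkprof summary above (a small Python sketch, using only the row and execute counts from the trace):

```python
# Rows and execute count taken from the tkprof summary above.
rows = 300_003
executions = 5_455

avg_array_size = rows / executions
print(round(avg_array_size))  # prints 55
```

So with the default 64KB buffer, each execute call inserted roughly 55 rows: import is clearly doing array inserts here, not single-row inserts.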
Let’s repeat this test case with a buffer size of 1MB.
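Before running the test, we can make a back-of-the-envelope estimate of what to expect. Assuming each row occupies roughly 520 bytes in the import buffer (the 500-byte padded VARCHAR2 plus the NUMBER and DATE columns and some per-row overhead; this per-row footprint is an assumption, not documented behavior), a rough sketch:

```python
import math

BUFFER_BYTES = 1_048_576   # buffer=1048576 passed to imp
EST_ROW_BYTES = 520        # assumption: ~500-byte v1 + n1, d1 + overhead
TOTAL_ROWS = 300_003

rows_per_execute = BUFFER_BYTES // EST_ROW_BYTES
expected_executes = math.ceil(TOTAL_ROWS / rows_per_execute)
print(rows_per_execute, expected_executes)  # prints 2016 149
```

If the buffer parameter is honored for date columns, the execute count should drop from thousands to somewhere in this ballpark; the actual numbers in the trace will differ since the true per-row footprint is only an estimate.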
sqlplus cbqt/cbqt <<EOF
drop table t1;
EOF

imp userid=cbqt/cbqt file=exp_t1.dmp log=imp_t1.log buffer=1048576 commit=Y full=Y

tkprof orcl11g_ora_3846.trc orcl11g_ora_3846.trc.out sort=execpu,fchcpu
Trace lines from the tkprof output file for the 1MB test case are shown below: