Oracle database internals by Riyaj

Discussions about Oracle performance tuning, RAC, Oracle internal & E-business suite.

inmemory: Why was that table not populated in the column store?

Posted by Riyaj Shamsudeen on September 11, 2014

I enabled a huge 70G table for inmemory population. I expected the population to take a while, but it didn't complete even after letting it run for a day. Why?
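The setup itself was nothing special; a sketch of the commands, with the actual table name replaced by the placeholder big_table:

```sql
-- Enable the table for inmemory population (big_table is a placeholder name).
alter table big_table inmemory priority high;

-- Watch population progress: bytes_not_populated should drop to zero
-- once the segment is fully populated in the column store.
select segment_name, populate_status, bytes_not_populated
from   v$im_segments
where  segment_name = 'BIG_TABLE';
```

It was this v$im_segments check that kept showing bytes still unpopulated a day later.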

ASH data

Initial review of the server showed no issues and no resource starvation, so this had to be a problem with the Oracle processes themselves. I started digging further, and ASH data shows that in numerous samples the process was reading blocks using single block I/O calls. Also, current_obj# matches the object_id of the table I was trying to populate.

   select * from (
    select start_time, end_time, sql_id, event, current_obj#,
           cnt_on_cpu + cnt_waiting tot_cnt,
  	   rank () over ( order by (cnt_on_cpu + cnt_waiting) desc ) rnk
    from  (
  	select sql_id, event, current_obj#,
  	       min(start_time) start_time,
  	       max(end_time)   end_time,
  	       sum(decode(session_state,'ON CPU',1,0))  cnt_on_cpu,
 	       sum(decode(session_state,'WAITING',1,0)) cnt_waiting
        from ( select
 		 first_value(sample_time) over( order by sample_time ) start_time,
 		 last_value(sample_time) over( order by sample_time
 				rows between unbounded preceding and unbounded following ) end_time,
 		 sql_id, event, session_state, current_obj#
 	       from (select * from v$active_session_history ash
                     where session_id= &&sid and session_serial#=&&serial_number)
             )
        group by sql_id, event, current_obj#
    )
   )
   where rnk <= 10
   /
START_TIME                END_TIME                  SQL_ID        EVENT                          CURRENT_OBJ#    TOT_CNT        RNK
------------------------- ------------------------- ------------- ------------------------------ ------------ ---------- ----------
18-AUG-14 AM 18-AUG-14 AM		  db file sequential read	       168967	     990	  1
												       168967	     156	  2
								  direct path read		       168967	      50	  3
						    bdwtqttka2w2y					   -1	       3	  4
						    bdwtqttka2w2y direct path read		       168967	       1	  5
						    24uqc4aqrhdrs				       168967	       1	  5
													   -1	       1	  5

That doesn’t make sense! The whole table must be loaded into the column store, so why would the session initiate a huge number of single block reads? I expected multi-block reads, similar to a full table scan. So, I used Tanel’s snapper tool to review the session statistics and time model statistics.

- Session Snapper v2.01 by Tanel Poder

    SID, USERNAME  , TYPE, STATISTIC                               ,         DELTA, HDELTA/SEC,    %TIME, GRAPH
    119, (W000)    , STAT, table scan rows gotten                  ,        214554,      6.92k,
    119, (W000)    , STAT, table scan blocks gotten                ,          1825,      58.87,
    119, (W000)    , STAT, table fetch by rowid                    ,             1,        .03,
    119, (W000)    , STAT, table fetch continued row               ,          4107,     132.48,
    119, (W000)    , STAT, index scans kdiixs1                     ,             2,        .06,
    119, (W000)    , STAT, IM prepopulate CUs                      ,             1,        .03,
    119, (W000)    , STAT, IM prepopulate bytes from storage       ,      25165824,     811.8k,
    119, (W000)    , STAT, IM prepopulate accumulated time (ms)    ,         77305,      2.49k,
    119, (W000)    , STAT, IM prepopulate bytes in-memory data     ,       9962722,    321.38k,
    119, (W000)    , STAT, IM prepopulate bytes uncompressed data  ,      44530632,      1.44M,
    119, (W000)    , STAT, IM prepopulate rows                     ,        378657,     12.21k,
    119, (W000)    , STAT, IM prepopulate CUs memcompress for query,             1,        .03,
    119, (W000)    , STAT, session cursor cache hits               ,             2,        .06,
    119, (W000)    , STAT, buffer is not pinned count              ,         26142,     843.29,
    119, (W000)    , STAT, parse count (total)                     ,             2,        .06,
    119, (W000)    , STAT, execute count                           ,             2,        .06,
    119, (W000)    , TIME, background IM prepopulation elapsed time,      32113087,      1.04s,   103.6%, |@@@@@@@@@@|
    119, (W000)    , TIME, background cpu time                     ,       3033539,    97.86ms,     9.8%, |@         |
    119, (W000)    , TIME, background IM prepopulation cpu time    ,       3014541,    97.24ms,     9.7%, |@         |
    119, (W000)    , TIME, background elapsed time                 ,      32131726,      1.04s,   103.7%, |@@@@@@@@@@|
    119, (W000)    , WAIT, db file sequential read                 ,      28170073,   908.71ms,    90.9%, |@@@@@@@@@@|
    119, (W000)    , WAIT, direct path read                        ,        601828,    19.41ms,     1.9%, |@         |


I see that 90% of the time was spent on single block reads, which matches the ASH data. The W000 process was processing about 6,920 rows per second; it could do much better. But look closely at the ‘table fetch continued row’ statistic (4th row in the output): about 132 chained rows per second were being processed! Chained rows can cause a huge number of single block reads. I expected the whole table to be read as in a full table scan, skipping the chained rows: a full table scan reads the whole table with multi-block reads, and when it encounters the tail piece of a chained row later in the scan, that tail piece is matched with its head row. But that’s not what happened with inmemory population.
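The same counter can be watched directly for a worker process; a minimal sketch, assuming you substitute the worker’s SID for &&worker_sid:

```sql
-- Compare full scan progress against chained row fetches for one session.
select n.name, s.value
from   v$sesstat s, v$statname n
where  s.statistic# = n.statistic#
and    s.sid        = &&worker_sid
and    n.name in ('table fetch continued row', 'table scan rows gotten');
```

A steadily climbing ‘table fetch continued row’ value alongside single block read waits points at the same problem snapper showed here.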

It looks like inmemory worker processes must follow the chained rows: because the table is populated in parallel by many worker processes, and a worker process needs to read the whole row to populate the column store, it cannot wait for the chained row’s tail block to be read later in the scan. So, the worker process simply follows the chain. That’s why all the worker processes were triggering a huge number of single block read calls and populating the table very, very slowly, almost painful to watch. So, if you are converting a table to use the inmemory option, make sure to check for chained rows. Of course, you need to use ‘analyze table … list chained rows’, as the chain_cnt column is not maintained by the dbms_stats package.
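A sketch of that check, assuming the CHAINED_ROWS holding table created by the utlchain.sql script that ships with the database (big_table is a placeholder name):

```sql
-- Create the CHAINED_ROWS holding table once.
@?/rdbms/admin/utlchain.sql

-- Record the head rowids of chained/migrated rows into CHAINED_ROWS.
analyze table big_table list chained rows into chained_rows;

-- How bad is it?
select count(*) from chained_rows where table_name = 'BIG_TABLE';
```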

We knew the reason for the chained rows in that table, and we reorganized it to remove the row chaining. After the reorg, the table was populated quickly.
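The reorg itself can be as simple as a table move; a hedged sketch (big_table and big_table_pk are placeholder names), keeping in mind that a move leaves the table’s indexes UNUSABLE until they are rebuilt:

```sql
-- Rebuild the table into fresh blocks, eliminating migrated rows.
alter table big_table move;

-- The move invalidates the table's indexes; rebuild each one.
alter index big_table_pk rebuild;
```

Note this only cures rows migrated during updates; a row genuinely longer than the block size will chain again no matter how the table is reorganized.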

Update: Minor edit to fix a few grammatical mistakes.

3 Responses to “inmemory: Why was that table not populated in the column store?”

  1. vaurob said


    very interesting post.
    However I am only 99% convinced that the root cause was the row-chaining. :))))

    – The effect of the reorg may have been some reduction in the number of blocks in the table. Do you have the block counts or the size of the table before/after the reorg? This may have contributed to the speedup.

    – Potentially many blocks of the table were already in the buffer cache, which could have “split” the multiblock reads into single block reads, provided that IM prepopulation uses the buffer cache at all. Can you confirm this latter?

    – It would be interesting to see the snapper output of the “fast” IM preload.

    Waiting for your feedback.

    • Hi
      – Reorg reduced the segment size by less than a GB, but that is nothing compared to the size of these objects.
      – This is a clone of the production database. I was waiting for the objects to be loaded in to inmemory. Nothing connected to the database. In fact, I had made a typo in the tnsnames.ora file and the application connected only a few days later after fixing the error.
      – Only change between the restarts is to reorg those tables. That completely eliminated the loading issue.
      – Now, tables are loading just fine. I may not be able to isolate that table alone. I could do disable/reenable trick, however, that will be populated by trickle population and works differently. Let me see if I can get one for you.

      Still, I can be wrong (it has happened before🙂 ). But, the data suggests this as an issue.

      • vaurob said

        Thanks for the feedback! +1 percentage point🙂
        Also you made me google trickle population:). Reading you is a good way of learning.

