Oracle database internals by Riyaj

Discussions about Oracle performance tuning, RAC, Oracle internals & E-Business Suite.

Group by Hash aggregation

Posted by Riyaj Shamsudeen on September 30, 2010

So, there I was, merrily enjoying OpenWorld 2010 presentations in San Francisco, when I got a call from a client about a performance issue. The client had recently upgraded from version 9i to version 10g in an E-Business Suite environment. I had the privilege of consulting before the upgrade, so we set up the environment optimally, and the upgrade itself was seamless. The client did not see much regression except for one query: that query was running for hours in 10g, compared to 15 minutes in 9i.

Review and Analysis

I reviewed the execution plan in the development database and did not see any issues with it; the execution plans in development and production both looked decent enough. I wasn't able to reproduce the issue in the development database either, so the client allowed me to trace the SQL statement in the production database. Since the data volume in a few tables differs between the production and development databases, we had to analyze the problem in the production environment.

I had to collect as much data as possible, since the tracing was a one-time opportunity. In addition to tracing the process with waits => true, I set up a small script to capture the process stack and process memory map of that UNIX dedicated server process.
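For reference, one way to get that kind of 10046-style trace with wait events in 10g is DBMS_MONITOR. This is a minimal sketch; the SID and SERIAL# values below are placeholders, not the actual session we traced.

-- Enable SQL trace with wait events (no bind values) for a given session:
exec dbms_monitor.session_trace_enable(session_id => 1234, serial_num => 56789, waits => TRUE, binds => FALSE);

-- ...let the statement run, then stop tracing:
exec dbms_monitor.session_trace_disable(session_id => 1234, serial_num => 56789);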

The execution plan from the production database is printed below. [Review the execution plan carefully; it gives away the problem immediately.] One execution of this statement took 13,445 seconds of elapsed time, and almost all of it was CPU time. Why would the process consume 13,719 seconds of CPU time? The same process completed in just 15 minutes in 9i, as confirmed by Statspack reports. [As a side note, we collected an enormous amount of performance data in 9i in the production environment before upgrading to 10g, just so that we could quickly resolve any post-upgrade performance issues; you should probably follow that guideline too.] That collection came in handy, and it is clear that the SQL statement was completing in 15 minutes in 9i and took nearly 3.75 hours after upgrading the database to version 10g.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       10  13719.71   13445.94         27    5086407          0       99938
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12  13719.71   13445.94         27    5086407          0       99938

     24   HASH GROUP BY (cr=4904031 pr=27 pw=0 time=13240600266 us)
     24    NESTED LOOPS OUTER (cr=4904031 pr=27 pw=0 time=136204709 us)
     24     NESTED LOOPS  (cr=4903935 pr=27 pw=0 time=133347961 us)
 489983      NESTED LOOPS  (cr=3432044 pr=27 pw=0 time=104239982 us)
 489983       NESTED LOOPS  (cr=2452078 pr=27 pw=0 time=91156653 us)
 489983        TABLE ACCESS BY INDEX ROWID HR_LOCATIONS_ALL (cr=1472112 pr=27 pw=0 time=70907109 us)
 489983         INDEX RANGE SCAN HR_LOCATIONS_UK2 (cr=981232 pr=0 pw=0 time=54338789 us)(object id 43397)
 489983        INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=979966 pr=0 pw=0 time=17972426 us)(object id 37657)
 489983       INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=979966 pr=0 pw=0 time=10876601 us)(object id 43498)
     24      INDEX RANGE SCAN UXPP_FA_LOCATIONS_N3 (cr=1471891 pr=0 pw=0 time=27325172 us)(object id 316461)
     24     TABLE ACCESS BY INDEX ROWID PER_ALL_PEOPLE_F (cr=96 pr=0 pw=0 time=2191 us)
     24      INDEX RANGE SCAN PER_PEOPLE_F_PK (cr=72 pr=0 pw=0 time=1543 us)(object id 44403)
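(For completeness: the call statistics and row source plan above are formatted with tkprof from the raw trace file. A typical invocation looks like the line below; the trace file name is a placeholder.)

tkprof PROD_ora_12345.trc PROD_ora_12345.prf sys=no sort=exeela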

pstack, pmap, and truss

Reviewing the pstack output generated from the script shows many calls to functions such as kghfrempty, kghfrempty_ex, and qerghFreeHashTable, implying hash table operations. Could the hash table have something to do with the time consumed?

 ( Only partial entries shown ) 
 0000000103f41528 kghfrempty 
 0000000103f466ec kghfrempty_ex 
 0000000103191f1c qerghFreeHashTable 
 000000010318e080 qerghFetch 
 00000001030b1b3c qerstFetch 
...
 0000000103f41558 kghfrempty 
 0000000103f466ec kghfrempty_ex 
 0000000103191f1c qerghFreeHashTable 
 000000010318e080 qerghFetch
 00000001030b1b3c qerstFetch

Truss of the process also showed quite a few mmap calls. So, is the process allocating more memory for a hash table?

...
mmap(0xFFFFFFFF231C0000, 65536, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 7, 0) = 0xFFFFFFFF231C0000
...
pollsys(0xFFFFFFFF7FFF7EC8, 1, 0xFFFFFFFF7FFF7E00, 0x00000000) = 0
mmap(0xFFFFFFFF231D0000, 65536, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 7, 0) = 0xFFFFFFFF231D0000
...

Execution plan again ..

Reviewing the execution plan again revealed an interesting issue. I am posting only the two relevant lines from the execution plan below. As you can see, the elapsed time at the NESTED LOOPS OUTER step is 136 seconds, but the elapsed time at the parent HASH GROUP BY step is 13,240 seconds, meaning nearly 13,100 seconds were spent in the HASH GROUP BY step alone! Why would the process spend 13,100 seconds in a group by operation? The rest of the SQL execution took only 136 seconds, yet the group by operation took 13,100 seconds. That doesn't make sense, does it?

     24   HASH GROUP BY (cr=4904031 pr=27 pw=0 time=13240600266 us)
     24    NESTED LOOPS OUTER (cr=4904031 pr=27 pw=0 time=136204709 us)
...
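As an aside, if tracing is not an option, a similar per-step breakdown can often be pulled from the cursor cache with DBMS_XPLAN, provided row source statistics are being collected. A minimal sketch, assuming the statement was run with the gather_plan_statistics hint (or with statistics_level = all) and using a placeholder sql_id:

-- Run the problem SQL with the /*+ gather_plan_statistics */ hint first, then:
select * from table(dbms_xplan.display_cursor('placeholder_sqlid', null, 'ALLSTATS LAST'));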

OFE = 9i

Knowing that the time is spent in the group by operation and that the new 10g hash aggregation feature is in use, I decided to test this SQL statement with the 9i optimizer. The SQL completed in 908 seconds with OFE (optimizer_features_enable) set to 9.2.0.8 (the data is a little different, since production is an active environment). You can also see that a SORT technique is now used to group the data.

alter session set optimizer_features_enable='9.2.0.8';

tkprof output with OFE set to 9.2.0.8:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch   106985    887.41     908.25     282379    3344916        158     1604754
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   106987    887.41     908.25     282379    3344916        158     1604754

      4   SORT GROUP BY (cr=2863428 pr=0 pw=0 time=37934456 us)
      4    NESTED LOOPS OUTER (cr=2863428 pr=0 pw=0 time=34902519 us)
      4     NESTED LOOPS  (cr=2863412 pr=0 pw=0 time=34198726 us)
 286067      NESTED LOOPS  (cr=2003916 pr=0 pw=0 time=24285794 us)
 286067       NESTED LOOPS  (cr=1431782 pr=0 pw=0 time=19288024 us)
 286067        TABLE ACCESS BY INDEX ROWID HR_LOCATIONS_ALL (cr=859648 pr=0 pw=0 time=13568456 us)
 286067         INDEX RANGE SCAN HR_LOCATIONS_UK2 (cr=572969 pr=0 pw=0 time=9271380 us)(object id 43397)
 286067        INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=572134 pr=0 pw=0 time=4663154 us)(object id 37657)
...

Knowing the problem is in the GROUP BY step, we set up a SQL profile with _gby_hash_aggregation_enabled set to FALSE, disabling the new 10g feature for that SQL statement alone. With the SQL profile in place, performance of the SQL statement is comparable to the pre-upgrade timing.
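For those wondering how to do that: below is a quick session-level test, followed by a sketch of a statement-level SQL profile built with DBMS_SQLTUNE.IMPORT_SQL_PROFILE. The profile name and SQL text are placeholders, and this illustrates the approach rather than the exact profile we created.

-- Session-level test of the workaround (underscore parameters need double quotes):
alter session set "_gby_hash_aggregation_enabled" = false;

-- Statement-level workaround: attach the parameter as an OPT_PARAM hint via a SQL profile.
declare
  h sys.sqlprof_attr;
begin
  h := sys.sqlprof_attr('OPT_PARAM(''_gby_hash_aggregation_enabled'' ''false'')');
  dbms_sqltune.import_sql_profile(
    sql_text    => 'select ... full text of the problem SQL ...',
    profile     => h,
    name        => 'GBY_HASH_AGG_OFF_PROFILE',
    force_match => true);
end;
/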

This almost sounds like a bug! Bug 8223928 matches this stack, but it describes the opposite behavior. The client will work with Oracle Support to get a bug fix for this issue.

Summary

In summary, you can use scientific methods to debug performance issues. Revealing the details underneath enables you to come up with a workaround quickly, leading to a faster resolution.
Note that I am not saying the hash group by feature is bad. Rather, we seem to have encountered an unfortunate bug that caused performance issues at this client. I think hash aggregation is a good feature, as the efficiency of grouping operations can improve when you have an ample amount of memory. That is why we disabled this feature at the statement level, NOT at the instance level.
This blog is also available in a traditional document format: hash_group_by_orainternals

Update 1:

I am adding a script to capture pmap and pstack output in a loop, 1000 times with a 10-second interval. Tested on Oracle Solaris.

#! /bin/ksh
# Capture the memory map (pmap) and stack (pstack) of a process in a loop:
# 1000 iterations, 10 seconds apart. Usage: pmap_loop.ksh <pid>
pid=$1
(( cnt=1000 ))
while [[ $cnt -gt 0 ]];
 do
        date                  # timestamp each sample
        pmap -x $pid          # process memory map
        pstack $pid           # process stack
        echo $cnt
        (( cnt=cnt-1 ))
        sleep 10
 done

To call the script (assuming 7887 is the UNIX pid of the dedicated server process):
nohup ./pmap_loop.ksh 7887 >> /tmp/a1.lst 2>>/tmp/a1.lst &

The syntax for the truss command is given below. Please remember, you shouldn't use pmap, pstack, and truss concurrently. These commands stop the process (however briefly!) to inspect it, so use them sparingly. [I had a client who ran truss against the LGWR process on a continuous(!) basis, and the database used to crash randomly!] I realize that pmap/pstack/truss can be scripted to work together, but that would involve submitting a background process for the truss command and killing that process after a small timeout window. That would be a risky approach in a production environment, so I prefer to run the truss command manually and CTRL+C it after a few seconds.

truss -d -E -o /tmp/truss.lst -p 7887

I cannot stress enough that you should not overuse these commands in a production environment. strace (Linux) and tusc (HP-UX) are commands comparable to truss (Solaris).

8 Responses to “Group by Hash aggregation”

  1. Yasser said

    Excellent Scientific Approach !!

    Could you please share the pstack, pmap, and truss commands in detail? That would be helpful for reference.

    -Yasser

  2. Bhavik Desai said

    Hi Riyaj,

    I encountered a similar issue in one of my 10g data warehouses.
    As you mentioned, HASH GROUP BY is not bad, and there are certainly workloads where HASH GROUP BY performs better than SORT GROUP BY.
    However, I resolved my issue by adding a dummy ORDER BY clause to the existing GROUP BY.
    Note that an ORDER BY clause will usually lead to a SORT GROUP BY.

  3. Thank you for reading my blog.

    Yasser
    I think I have a presentation about UNIX tools. I may not have uploaded it yet; I will check on it.

    Bhavik
    Thanks for sharing your experience. I guess introducing a mandatory sort might avoid the HASH GROUP BY feature in 10g and 11g. In the future, though, Oracle might implement grouping plus sorting using hash methods.

    Thanks
    Riyaj

  4. Walter said

    Funny you mention this post, as we are having the opposite issue in our 11.1.0.7 environment. It is mentioned in bug 8619361, and SORT GROUP BY is taking an awful amount of time.

    Walter.

  5. Walter said

    Riyaj,

    Your post is interesting, as we are suffering from the exact opposite: we want the optimizer to perform HASH GROUP BY, whereas it is performing SORT GROUP BY on our 11.1.0.7 database. Metalink has a bug filed for this as well:

    Note: 8619631.8

    Walter.

    • Hello Walter
      Thanks for reading my blog. For some reason, your comments went into the spam folder; sorry about that.
      I read bug 8619631. Interesting issue. What is the performance difference between SORT GROUP BY and HASH GROUP BY in your case? Is this related to the HASH GROUP BY hashing method matching partition boundaries, or something else? Would you please share some details?

      Cheers
      Riyaj

  6. Group By

    I posted a question to Jonathan regarding the performance of the “hash group by” operation in 10gR2 and 11g.

    Jonathan started a poll to find out how many DBAs actually disable this feature in 10g and 11g.
