[...] RMOUG, etc., and I am also an Oak Table member. Some of my papers and presentations can be found on my personal blog. [...]
Pipelined parallel processing is specifically a performance feature: it will improve performance, and it is quite useful for multi-stage application processing, since each stage can stream rows to the next instead of materializing intermediate results. I know of a DBA who implemented pipelined functions for exactly that reason.
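As a minimal sketch of the multi-stage idea (table, column, and type names here are hypothetical), a pipelined table function pipes each transformed row back to the caller as it is produced:

```sql
-- Hypothetical supporting types for the function's output rows.
CREATE OR REPLACE TYPE order_row AS OBJECT (order_id NUMBER, amount NUMBER);
/
CREATE OR REPLACE TYPE order_tab AS TABLE OF order_row;
/
-- A pipelined function: rows are streamed to the caller with PIPE ROW,
-- so the next stage can start consuming before this stage finishes.
CREATE OR REPLACE FUNCTION transform_orders (p_cur SYS_REFCURSOR)
  RETURN order_tab PIPELINED
IS
  l_id     NUMBER;
  l_amount NUMBER;
BEGIN
  LOOP
    FETCH p_cur INTO l_id, l_amount;
    EXIT WHEN p_cur%NOTFOUND;
    -- stage-1 transformation (illustrative 10% uplift)
    PIPE ROW (order_row(l_id, l_amount * 1.1));
  END LOOP;
  CLOSE p_cur;
  RETURN;
END;
/
-- Consumed like a table in the next stage:
SELECT *
FROM   TABLE(transform_orders(
         CURSOR(SELECT order_id, amount FROM orders)));
```

Because the function is queried through `TABLE()`, stages can be chained: the output of one pipelined function becomes the cursor input of the next.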
I am not sure why you would think that pipelined parallel functions would favor indexes, but if you can provide your test case I can certainly research it. What version of Oracle are you on?
These pipelined functions are especially useful in multi-CPU environments. I don’t know how the feature interacts with Standard Edition; I just looked it up and I don’t see anything specific to pipelined functions and Standard Edition, though.
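To illustrate the multi-CPU angle, here is a hedged sketch (hypothetical names again, and assuming object/collection types like `order_row`/`order_tab` have been created) of making the function parallel-enabled, so multiple parallel query slaves can each run a copy of it over a slice of the input cursor:

```sql
-- PARALLEL_ENABLE (PARTITION ... BY ANY) tells Oracle the function can run
-- in parallel and that input rows may be distributed among slaves freely.
-- A SYS_REFCURSOR argument is compatible with PARTITION BY ANY.
CREATE OR REPLACE FUNCTION transform_orders_px (p_cur SYS_REFCURSOR)
  RETURN order_tab PIPELINED
  PARALLEL_ENABLE (PARTITION p_cur BY ANY)
IS
  l_id     NUMBER;
  l_amount NUMBER;
BEGIN
  LOOP
    FETCH p_cur INTO l_id, l_amount;
    EXIT WHEN p_cur%NOTFOUND;
    PIPE ROW (order_row(l_id, l_amount));
  END LOOP;
  CLOSE p_cur;
  RETURN;
END;
/
-- A parallel hint inside the cursor subquery drives parallel execution:
SELECT *
FROM   TABLE(transform_orders_px(
         CURSOR(SELECT /*+ PARALLEL(o 4) */ order_id, amount
                FROM   orders o)));
```

Whether you actually see multiple slaves depends, as usual, on the parallel execution settings and edition/licensing of the database in question.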
Excellent presentation/material on Oracle 11g performance-related features. I was wondering whether you happened to figure out what the lock type DO really means or indicates. I also noticed that there is another lock type, AE, in 11g for which I didn’t find any reference.
Great stuff. I hope you don’t mind my asking: how did you discover that LMS won’t serve CR blocks until redo is flushed on the serving node? I think I have encountered a situation at work that fits that theory. I need to collect statistics from a few more work cycles before I can be sure, but so far it looks as if freeing up LGWR on a high-DML instance causes a significant reduction in gc cr waits on the other instances. Does that sound right to you? Or did I perhaps misunderstand your point?
Your understanding is precisely correct. I dealt with a situation in which a costly parallel DML batch process was allowed to run on one node on the assumption that its effect on the other nodes would be minimal. To our surprise, gc cr waits increased many-fold, causing severe performance problems. Eventually the other nodes became so unusable that we ended up killing the parallel DML (which in turn caused massive rollback and increased redo, but that is a story for another day).
We then reproduced this scenario in our pre-production test bed, and with our test scenarios we were able to show that CR log-flush waits by LMS were what caused the issues in the other nodes.
Either way, the workload characteristics (printed in the first part of a Statspack or AWR report) should help you here, at least somewhat. Specifically, the value of the statistic “global cache cr block flush time” on the serving node will be much higher. Of course, Statspack/AWR reports print averages, which in turn can hide real problems; so querying a few of these global cache statistics from v$sysstat every second or so, and analyzing that raw data, would give a clearer picture.
These statistics are also described in this glossary.
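A minimal sketch of that per-second sampling, assuming access to v$sysstat and execute privilege on DBMS_LOCK (the LIKE patterns are illustrative; the exact statistic names vary by version, e.g. “global cache cr block flush time” vs. “gc cr block flush time”):

```sql
-- Sample global-cache CR statistics once a second for 10 seconds, so spikes
-- hidden by Statspack/AWR averages show up in the raw, timestamped values.
SET SERVEROUTPUT ON
BEGIN
  FOR i IN 1 .. 10 LOOP
    FOR r IN (SELECT name, value
              FROM   v$sysstat
              WHERE  name LIKE '%cr block flush time%'
                 OR  name LIKE '%cr blocks served%')
    LOOP
      DBMS_OUTPUT.put_line(
        TO_CHAR(SYSDATE, 'HH24:MI:SS') || '  ' || r.name || ' = ' || r.value);
    END LOOP;
    DBMS_LOCK.sleep(1);
  END LOOP;
END;
/
```

Since v$sysstat values are cumulative since instance startup, it is the deltas between consecutive samples that matter, not the absolute values.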
[...] a paper/presentation by Riyaj Shamsudeen entitled Battle of the Nodes: RAC Performance Myths (available here). As I was looking through it I saw one example that struck me as very odd (Myth #3) and I [...]