Tuesday, May 25, 2004
One thing I did find mentioned was the use of bulk binds and collections in PL/SQL. It just so happens that I was inspired by an example in Connor McDonald et al.'s new book to look at the resource requirements and performance implications of this technique. The script I have used can be found here. The main lesson is that processing multiple rows at a time can indeed gain you significant performance improvements; the main resource consumed appears to be memory. There is a law of diminishing returns, however, so it probably makes sense for most people to use the LIMIT clause and fetch a few hundred rows at most. You don't want many sessions all using huge amounts of RAM for an extra couple of percent performance improvement.
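A minimal sketch of the pattern (the table and batch size are illustrative, not taken from my test script):

declare
    cursor c is select object_name from all_objects;
    type t_names is table of all_objects.object_name%type;
    l_names t_names;
begin
    open c;
    loop
        -- fetch in batches of 256 rather than one row, or all rows, at a time
        fetch c bulk collect into l_names limit 256;
        for i in 1 .. l_names.count loop
            null; -- per-row processing goes here
        end loop;
        exit when c%notfound;
    end loop;
    close c;
end;
/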
My testing also appears to indicate that the performance improvements in 10g are significant in this area, but again you pay the price in memory, even compared with 9i. Both of the tests below were done on the same laptop with similarly configured databases. The figures shown are extracts from v$mystat (check out the script for the exact meaning). NB the memory figure in version 10 is probably inaccurate for the first few results (until it starts increasing).
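For reference, the 'Max Mem' figure presumably comes from the session statistics; a sketch of the kind of query involved (the actual script may differ):

select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'session pga memory max';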
9.2.0.5 first:
NIALL 25-MAY-2004 14:00@nl9204>exec bulk_test
Rows:1
- Time: 867
- Max Mem: 369004
- CPU: 0
Rows:4
- Time: 295
- Max Mem: 451504
- CPU: 0
Rows:16
- Time: 134
- Max Mem: 451504
- CPU: 0
Rows:64
- Time: 94
- Max Mem: 451504
- CPU: 0
Rows:256
- Time: 86
- Max Mem: 501192
- CPU: 0
Rows:1024
- Time: 85
- Max Mem: 632800
- CPU: 0
Rows:4096
- Time: 86
- Max Mem: 879448
- CPU: 0
Rows:16384
- Time: 97
- Max Mem: 1964240
- CPU: 0
Rows:65536
- Time: 99
- Max Mem: 6515112
- CPU: 0
Rows:262144
- Time: 98
- Max Mem: 24750552
- CPU: 0
Rows:1048576
- Time: 109
- Max Mem: 46750112
- CPU: 0
PL/SQL procedure successfully completed.
10.1.0 next:
NIALL 25-MAY-2004 14:12@nl1010>exec bulk_test
Rows:1
- Time: 834
- Max Mem: 1743436
- CPU: 0
Rows:4
- Time: 238
- Max Mem: 1743436
- CPU: 0
Rows:16
- Time: 91
- Max Mem: 1743436
- CPU: 0
Rows:64
- Time: 54
- Max Mem: 1743436
- CPU: 0
Rows:256
- Time: 45
- Max Mem: 1743436
- CPU: 0
Rows:1024
- Time: 41
- Max Mem: 1808972
- CPU: 0
Rows:4096
- Time: 43
- Max Mem: 2136652
- CPU: 0
Rows:16384
- Time: 50
- Max Mem: 3381836
- CPU: 0
Rows:65536
- Time: 55
- Max Mem: 8493644
- CPU: 0
Rows:262144
- Time: 58
- Max Mem: 28940876
- CPU: 0
Rows:1048576
- Time: 59
- Max Mem: 53647948
- CPU: 0
PL/SQL procedure successfully completed.
Monday, May 24, 2004
First, I should know better than to make blanket statements, as I did yesterday.
'Never update a row at a time in a loop' should read something like 'Never update a row at a time in a loop if you can accomplish the same result by operating on the set of records'. For example:
begin
    for c in (select col1, col2, col3 from tab1) loop
        update tab2
           set col1 = c.col1 || c.col2 || mod(c.col3, 1000)
         where col2 = c.col1;
    end loop;
    commit;
end;
Would be better written using set operations.
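A sketch of the set-based equivalent (assuming, as the loop above implicitly does, that col1 is unique in tab1):

update tab2 t
   set t.col1 = (select s.col1 || s.col2 || mod(s.col3, 1000)
                   from tab1 s
                  where s.col1 = t.col2)
 where exists (select null
                 from tab1 s
                where s.col1 = t.col2);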
Second.
I have in the past listed 'the optimizer can ignore hints' as a myth. This is because the optimizer doesn't ignore hints; it follows them when considering the range of available access paths (this assumes that the hint is valid and correctly specified). This can be seen quite easily from a 10053 trace. The problem usually is either that the hint is invalid, or that insufficient hints have been specified. Nevertheless, it turns out that with 10g the optimizer really can ignore hints, because a new hidden parameter, _optimizer_ignore_hints, can be set to true (the default is false). In addition, the 10053 trace has been enhanced so that adjusted parameter file parameters show up at the top and (what looks like) the whole range of default parameters shows up in the trace.
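For the curious, a sketch of how you might see this for yourself (hidden parameters are, as ever, for test systems only):

alter session set "_optimizer_ignore_hints" = true;
alter session set events '10053 trace name context forever, level 1';
-- now run a hinted query and inspect the trace file in user_dump_dest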
Saturday, May 22, 2004
1. Please use the appropriate command. It is not necessary to drop and recreate a table to add a column (a sketch of points 1 and 2 follows this list).
2. If you were going to drop and recreate a table, consider that Oracle will think the new table is very small until you collect stats, so please do collect stats, or at least allow the poor sap in charge of the program to catch the DDL and add a stats collection task in.
3. Don't update a row at a time in a loop.
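As promised, a minimal sketch of points 1 and 2 (table and column names are hypothetical):

-- point 1: add the column in place rather than drop and recreate
alter table emp add (bonus number);

-- point 2: if you really must recreate a table, gather stats afterwards
begin
    dbms_stats.gather_table_stats(user, 'EMP');
end;
/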
That is all.
Friday, May 14, 2004
create index idx_desc on tab(col1 desc)
It turned out to be an interesting journey. In summary:
For all of these reasons descending indexes will end up larger, perhaps significantly, than their ascending cousins.
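Part of the explanation is easy to see in the data dictionary: a descending index is really a function-based index on a system-generated expression. A sketch, assuming the index above has been created:

select index_type
  from user_indexes
 where index_name = 'IDX_DESC';
-- FUNCTION-BASED NORMAL

select column_expression
  from user_ind_expressions
 where index_name = 'IDX_DESC';
-- "COL1" - the values are stored in a byte-inverted form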
Wednesday, May 05, 2004
A server parameter file (SPFILE) can be thought of as a repository for initialization parameters that is maintained on the machine where the Oracle database server executes. It is, by design, a server-side initialization parameter file. Initialization parameters stored in a server parameter file are persistent, in that any changes made to the parameters while an instance is running can persist across instance shutdown and startup. This eliminates the need to manually update initialization parameters to make changes effected by ALTER SYSTEM statements persistent. It also provides a basis for self tuning by the Oracle database server.
Sounds good, doesn't it?
There are a number of drawbacks that I have found, though. For example, nothing stops you storing mutually exclusive parameters in the spfile, and nothing complains until the next startup:
SQL> alter system set db_cache_size=50m scope=spfile;
System altered.
SQL> alter system set db_block_buffers=2000 scope=spfile;
System altered.
SQL> startup force;
ORA-00381: cannot use both new and old parameters for buffer cache size
specification
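One way out of this hole (a sketch; the path is illustrative) is to dump the spfile to a text file, remove one of the conflicting parameters by hand, and rebuild:

create pfile = '/tmp/init_fix.ora' from spfile;
-- edit /tmp/init_fix.ora to remove db_block_buffers, then
create spfile from pfile = '/tmp/init_fix.ora';
startup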
As a result you will want to back up your spfile regularly. This is obviously common sense, but if you forgot to do it with the old text-based file you could at least edit the file by hand; the binary spfile should never be edited directly.
The preferred option is to use RMAN, which will handle this for you quite nicely. If you aren't using RMAN then you can export your server parameter file using the new SQL command

create pfile from spfile;

However, I ran into this today:
SQL> create pfile from spfile;
create pfile from spfile
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [733], [538970160], [pga heap], [], [], [], [], []
The [733] argument shows that this is essentially an out-of-memory condition; the second argument is the amount of memory in bytes requested!
In short, unless you actually have to use a server parameter file (think RAC), you may consider the potential drawbacks to outweigh the advantages.