From markl@hep.ucl.ac.uk Fri Mar 16 23:55:52 2001
Date: Sat, 9 Dec 2000 16:23:30 +0000
From: Mark Lancaster
To: r.stdenis@physics.gla.ac.uk
Cc: Bill Ashmanskas, Marjorie Shapiro, Jim Amundson, wolbers@fnal.gov,
    watts@physics.rutgers.edu, stefano.belforte@ts.infn.it, rharris@fnal.gov,
    ksmcf@fnal.gov, sexton@fnal.gov, Rob Snihur
Subject: Linux ORACLE timings ...

Hello Rick,

Well, unless there is a feature in the API, there should only have been one
connect - take a look at the code: it's only 10 lines, and I run the same
exe for msql and Oracle, except that I pass in the OTL identifier instead of
the msql identifier. I only initialise the manager once, so unless the
connect/re-connect inside DBManager has gone crazy, the connection shouldn't
be a factor.

Code is on: fcdgsi2:~markl/dbase/DB/DBUtils/timings/

Even if there was an Oracle backup going on, doesn't that illustrate quite
nicely how Oracle can hammer your machine?

I ran the code on the Linux Oracle machine ncdf16 - so a fair test:

  Insert time 1000*1 row = 122 sec (vs  5 for msql)
  Read time   1000*1 row =  46 sec (vs 17 for msql)

Feel free to run the code yourself on the Linux Oracle server.

Cheers
Mark

> These are incredibly bad numbers, and if anything cdfondev is actually a
> higher-spec machine than cdfonprd, both of which are likely to be higher
> spec than a home institute's. These numbers look like you were doing a lot
> of connects and disconnects, and, as you measured before, these take about
> 1 s each, so I would have expected 1000 s and you got 539 s. Is this what
> you did (before I speculate further)?
>
> Also, if you want to run this kind of test you do need to ensure that
> there is no backup going on at the time on the Oracle database machine (in
> this case, the int or dev instance). Finally, I think that the right
> machine to test on is our Linux box, ncdf16, a lower-end PC (300 MHz, I
> think) with 40 GB. Also, if we have to modify the front end to reduce
> connects, then we should be careful to test it that way and note the extra
> software development/licensing issues involved.
>
> On the Linux machine we have loaded 60% of the Run 2 estimated data file
> catalog size in simulation. At that point the box came to a grinding halt.
> We are investigating why this is. So another test to make is just that: to
> take one Run 2's worth of data and load the boxes to see how they perform
> on a typical run. Otherwise we face nasty surprises.
>
> cheers
> rick
>
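[Editor's sketch] The message above describes a ~10-line timing test: 1000
single-row inserts and 1000 single-row reads over one open connection. The
actual code lives in fcdgsi2:~markl/dbase/DB/DBUtils/timings/ and goes
through the CDF DBManager layer, which isn't shown here; as a rough
illustration only, a loop of that shape written directly against OTL might
look like the following. The connect string, table name and column names are
placeholders, and the OTL_ORA8I macro assumes an Oracle 8i client.

  // Hypothetical sketch: timing_test, its columns and the connect string are
  // placeholders, not taken from the real test code.
  #include <iostream>
  #include <ctime>

  #define OTL_ORA8I          // assumes an Oracle 8i client installation
  #include <otlv4.h>         // OTL 4 header

  int main()
  {
      otl_connect db;
      try {
          otl_connect::otl_initialize();      // initialise the OCI environment
          db.rlogon("user/password@ncdf16");  // one connect, reused for all operations

          // 1000 * 1-row inserts; buffer size 1 so each row is sent individually
          std::time_t t0 = std::time(0);
          otl_stream ins(1, "insert into timing_test values (:id<int>, :val<float>)", db);
          for (int i = 0; i < 1000; ++i)
              ins << i << 1.0f * i;
          db.commit();
          std::cout << "Insert 1000*1 row: " << std::time(0) - t0 << " s\n";

          // 1000 * 1-row reads; the stream re-executes each time a new :id is bound
          t0 = std::time(0);
          otl_stream sel(1, "select val from timing_test where id = :id<int>", db);
          for (int i = 0; i < 1000; ++i) {
              sel << i;
              float v;
              while (!sel.eof()) sel >> v;
          }
          std::cout << "Read 1000*1 row: " << std::time(0) - t0 << " s\n";
      }
      catch (otl_exception& e) {
          std::cerr << e.msg << std::endl;    // Oracle error text
      }
      db.logoff();                            // single disconnect at the end
      return 0;
  }

If the front end instead connected and disconnected per operation, the ~1 s
per connect quoted in the reply would dominate the measurement, which is the
distinction the thread is arguing about.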