What a response! Thanks everyone. I'm going to consolidate some replies here...

On Saturday 13 October 2007 13:09:37 nordi wrote:
> Suse 10.0, runlevel 5: 511.4
> Suse 10.0, runlevel 1: 920.7
> Suse 10.3, runlevel 2: 385.9
> Suse 10.3, runlevel 1: 756.9
> This is _very_ strange. Usually I would say the benchmark is broken,
> but the benchmark simply starts a shell script that starts some GNU
> utilities. There's not much you can break here.
Please note that the benchmarks themselves haven't been touched in 10 years at least (all my work has been on the framework around them). There could be all sorts of weirdness in there. But I don't think so; they're really very simple (too simple, really).
> Can someone confirm that running in runlevel 1 yields much higher
> benchmark scores?
Yes: as the USAGE file says:

    When running the tests, I do *not* recommend switching to single-user
    mode ("init 1"). This seems to change the results in ways I don't
    understand, and it's not realistic (unless your system will actually
    be running in this mode, of course).

No idea why, though.

On Saturday 13 October 2007 13:43:37 Anders Johansson wrote:
> But yes, the benchmark is broken. I haven't looked in any great detail
> at what it does, but how it measures it is just wrong.
>
> In theory, it runs for 60 seconds, and at the end it counts how many
> iterations it has managed to do in that time, averaged over a couple
> of runs.
>
> The problem is that it never checks whether it has actually run for 60
> seconds. It sets up a signal handler for SIGALRM, and just assumes that
> when the process receives that signal, the 60 seconds are up and it's
> time to report.
Not true. Most tests don't report the time taken; the Run script measures the *actual* time the test runs for, and uses that figure. This is not ideal, because it includes the program's start-up and shutdown in the test score, but that's how it's always been. I'm considering changing it, though; I already did for the FS tests.

On Saturday 13 October 2007 16:24:53 Lew Wolfgang wrote:
> I didn't try Ian's benchmarks, but I did fiddle around a bit with a
> floating-point intensive one that I've been using for years. It
> calculates very long FFTs and displays the accuracy.
>
> Bottom line is I didn't see any significant differences between
> runlevels 1 and 5. The benchmark ran in 8.7 seconds as measured by
> "time".
>
> It did run a bit faster in 10.3 than in 10.2. However, this wasn't a
> fair test, since my 10.2 is 32-bit and my 10.3 is 64-bit, on the same
> computer.
Then it's meaningless, I'm afraid. I'm seeing a 10-15% speedup just from running the 64-bit version of Linux as opposed to the 32-bit version, which would compensate for the slowdown in 10.3. Compare like with like.

See: http://www.hermit.org/Linux/Benchmarking/

My test shows dhrystone *faster* on 10.3, and double-precision whetstone about the same. So it's no surprise that your FP test shows no slowdown. The slowdown I saw was in context switching, shell scripts (dramatically) and system calls (dramatically).

OK, next batch... ;-)

Ian
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org