On Friday, 16 March 2018 9:33 tomtomme wrote:
> On 15.03.2018 at 22:52, Jan Engelhardt wrote:
>> On Thursday 2018-03-15 22:00, Cristian Rodríguez wrote:
>>> On 15-03-2018 at 13:52, Ianseeks wrote:
>>> These benchmarks are not particularly useful to developers, nor for taking any remediation steps if needed. They appear to be provided purely for entertainment.
>> That is exactly the scope of Phoronix.
> Care to explain how those network benchmarks could be improved? The discussed benchmark you are slandering is only a preview; the full comparison is in the works. It was requested and paid for by a user, so it must be useful to him.
>
> As a Tumbleweed user I enjoy those benchmarks, so entertaining it is, yes. But at the same time they also deliver valid data points, sometimes on a small scale (hardware-wise) and sometimes with nice big comparisons when it comes to gaming / GPUs.
Phoronix reviews and benchmarks are famous... I remember Mr. Larabel's "review" of an openSUSE conference in 2012(?) focusing mostly on the fact that there were problems with the food supply at the welcome party. :-)

Benchmarking is hard. If you want to do it right, that is. It means thinking about how to do the setup to minimize the random influence of other factors. Thinking carefully about what you really want to measure and how to do it so that the results are relevant. Analyzing the results to make sure they make sense. And, of course, trying to explain the results, in particular those which stand out.

This test? Exactly the opposite. He goes into great detail about which disks the server and client have, but there is not a word about the NICs; "Gigabit Ethernet" is not nearly enough, and that hidden "Realtek PCIe GBE Family + Microsoft ISATAP" (???) isn't much better. One might think that for a "networking performance test" the NIC would be more important than the disk. There is no information about how the client and server were connected. And, of course, not a word about netfilter configuration (which can affect latency quite a lot) or other networking settings; instead, we get a detailed list of the gcc flags used for the build.

It's proudly called "Network Benchmarks" or even "networking performance benchmarks", and yet all the author did was run the netperf TCP_RR and UDP_RR tests (i.e. one very specific aspect of networking performance) with some parameters he didn't bother to tell us. From the results, it's apparent he didn't notice that some tests show variance so high that the results should have been discarded and the tests repeated. Not to mention that netperf has "-I" and "-i" parameters to make this even easier. Instead, he runs the same test with two different lengths and presents the results separately. Why? The results should be the same within a margin of statistical error; if they are not, it should be a warning sign that something went wrong.
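For readers who haven't used netperf's "-I"/"-i" options: they make netperf repeat a test until the confidence interval around the mean result is narrow enough, instead of trusting a single run. The following is only my own minimal sketch of that idea (a normal approximation using invented sample numbers, not netperf's actual algorithm), showing why a run with huge variance should be flagged rather than published:

```python
# Sketch of the confidence-interval check behind netperf's -I/-i options:
# accept a set of repeated measurements only if the interval around the
# mean is narrow relative to the mean itself. (Illustrative only, not
# netperf's real implementation; sample numbers below are made up.)
import statistics

def interval_ok(samples, width_pct=5.0, z=2.576):  # z ~ 99% confidence
    """True if the +/- half-width of the confidence interval of the mean
    stays within width_pct/2 percent of the mean (normal approximation)."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5  # std. error of mean
    half_width = z * sem
    return half_width <= mean * (width_pct / 2) / 100

# A steady run: transaction rates cluster tightly around ~10350/s.
steady = [10340, 10365, 10350, 10370, 10345, 10360, 10355, 10348]
# An erratic run: the kind of spread that should trigger a re-run.
erratic = [900, 300, 1200, 150, 800, 250, 1100, 400]

print(interval_ok(steady))   # True  - results are reproducible
print(interval_ok(erratic))  # False - discard and repeat the test
```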
Or should we perhaps believe the PC gets tired if you run the test for a whole 6 minutes?

One thing that really stands out is the UDP_RR test on Tumbleweed, in particular the 360s one. The variance alone should have told the author that the test went completely bonkers. Even if he didn't realize that, he might have noticed that even the upper end of the indicated interval is still way below the results of the 60s test. Neither stopped him from publishing such completely unreliable results.

Out of curiosity, I quickly ran netperf UDP_RR between my two machines, one running 42.3 with a 4.15.8 Kernel:stable kernel (i.e. essentially Tumbleweed), the other Tumbleweed with a 4.16-rc4 kernel from Kernel:HEAD. The hardware is definitely worse than Mr. Larabel's, and the NICs aren't anything special either (an on-board Realtek 8168evl and a common consumer-grade Intel 82541GI). Both machines were running a KDE desktop (I only wanted to get some idea of the results), so I used "-I 99 -i 20,5" to make sure the results were not completely random. The result I got was... wait for it... 10359.48. Even with 1400 bytes of request/response size, I still get 3257.53. As the highest result in the Phoronix article is 918.92, I feel Mr. Larabel owes us some information about what he was actually testing and how. (Honestly, over a millisecond per roundtrip on gigabit ethernet? That should give anyone a hint that they are doing something wrong.)

I just hope the next article isn't going to present TCP_STREAM results (measured on gigabit ethernet). That would be even more ridiculous.

Michal Kubeček
--
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org
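[Editor's note: as a back-of-the-envelope check of the figures quoted above, not part of the original mail: netperf's TCP_RR/UDP_RR score is transactions per second, so its reciprocal is the mean request/response roundtrip time. That is where the "over a millisecond per roundtrip" remark comes from:]

```python
# Convert a netperf *_RR score (transactions per second) into the mean
# time per request/response roundtrip, in milliseconds.
def roundtrip_ms(transactions_per_sec):
    return 1000.0 / transactions_per_sec

print(f"{roundtrip_ms(918.92):.3f} ms")    # highest Phoronix result: ~1.088 ms
print(f"{roundtrip_ms(10359.48):.3f} ms")  # result quoted above: ~0.097 ms
```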