On 29. 11. 22, 8:24, Richard Biener wrote:
On Tue, 29 Nov 2022, Jiri Slaby wrote:
On 28. 11. 22, 20:10, Michal Suchánek wrote:
Yes, sorry - I meant to say that the -v3 improvement is more obvious because AVX widens the vectors. -v2 alone only improves more specific vectorizable workloads, while -v3 improves most vectorizable workloads.
That is, compiling the whole distro for -v2 does not provide appreciable benefits, because most of the specific workloads that benefit a lot from SSE already use it in one way or another, and compiling general-purpose code with it gives mixed results (as far as the few benchmarks provided show).
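To make the width point concrete, a minimal sketch (not from the original mail; the function name and flags are only an example):

  /* With gcc -O3 -march=x86-64-v2 this loop is auto-vectorized with
   * 128-bit SSE registers (4 floats per step); with -march=x86-64-v3
   * it gets 256-bit AVX2 registers (8 floats per step), which is
   * where the more visible -v3 win comes from. */
  void saxpy(float *restrict y, const float *restrict x, float a, int n)
  {
          for (int i = 0; i < n; i++)
                  y[i] = a * x[i] + y[i];
  }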
So can someone finally explain to me why it was decided that Leap/ALP should adopt this? It makes no sense to me, neither performance-wise (there is almost no difference) nor maintenance-wise (there is almost no difference).
It was first decided to go with -v3, but that was then backtracked to -v2 for a variety of FUD and technical reasons. The Factory-First policy and the close ALP/Leap relationship didn't help, of course.
Yes, in this particular case the policy IMO hurts more than it helps. So why not make an exception instead of rebuilding the whole TW because of some rigid rule?
Anyway, I kind of agree that -v2 doesn't make much sense on its own, but at least some advancement over x86-64 makes sense (for atomics and other minor details). Since there's no -v1.5, there's not much choice here (and yes, -v2 is kind of arbitrary).
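One concrete such atomics detail (my illustration, not from the mail): -v2 guarantees CMPXCHG16B, which baseline x86-64 does not. A minimal sketch, with a made-up file/function name:

  /* 16-byte compare-and-swap.  On baseline x86-64 this may go through
   * a locking libatomic helper; from -v2 up the CMPXCHG16B instruction
   * is guaranteed to exist.  Example build:
   *   gcc -O2 -march=x86-64-v2 -c cas16.c   (link users with -latomic) */
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  typedef struct { uint64_t lo, hi; } pair16;

  bool cas16(_Atomic pair16 *p, pair16 *expected, pair16 desired)
  {
          return atomic_compare_exchange_strong(p, expected, desired);
  }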
Then let glibc croak at runtime on anything below "v1.5" and die. There is still no point in picking an arbitrary vX while there is still a heap of non-v2 hardware out there. What required CPU features do you have in mind regarding the atomics and other minor details? We might need to require those instead.
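A sketch of such a runtime croak (hypothetical placement; glibc built with a minimum ISA level already dies early with a "CPU does not support x86-64-v2"-style fatal error, and the level names in __builtin_cpu_supports() need GCC 12 or newer):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          __builtin_cpu_init();
          /* checks all features required by the x86-64-v2 level */
          if (!__builtin_cpu_supports("x86-64-v2")) {
                  fputs("fatal: CPU below x86-64-v2\n", stderr);
                  exit(EXIT_FAILURE);
          }
          puts("x86-64-v2 ok");
          return 0;
  }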
I think that even defaulting to -v2 is going to help some ISVs (since RHEL 9 now requires x86-64-v2), both in compatibility and performance.
I don't understand the compatibility point. With v1 we would be "more" compatible, wouldn't we? And citation needed regarding performance.

thanks,
-- 
js
suse labs