On Wednesday 2013-09-04 03:37, Linda Walsh wrote:
dbf = gdbm_open("trial.gdbm", 0, GDBM_NEWDB, 0644, 0);
if (!dbf) {
    printf("%s, errno = %d, gdbm_errno = %d\n",
           "Fatal error opening trial.gdbm", errno, gdbm_errno);
}
Fatal error opening trial.gdbm, errno = 0, gdbm_errno = 2
(2 = GDBM_BLOCK_SIZE_ERROR)
If it is broken, that would be bad, since perl uses the above in its test suite -- and on my system, I've been getting breakages in building (and testing) perl for over a year...
Jan Engelhardt wrote:
> Your system is known to be broken in various strange ways every now and then.

Has it? Notice I haven't posted much in the way of problems for ~3+ months now. The system boots in ~25 seconds directly from disk, just as it has been able to do for the last ~15 years.

> Because perl in factory is built using a clean, well-known state. Every time.
Which means it is *untested* in the real world (example follows). If the extent of build & test is a single-config, sterile environment, how can one show it will work in any environment that differs from the one config used for build & test? There is no proof, and no credibility, that a SuSE package will work on a given SuSE installed system.

This is a perfect example. GDBM appears to have been broken by someone from BSD assuming that the "st_blksize" field of "stat" is the "default block size" and will be a *power* (not just a multiple) of 2. That isn't always true on Linux or POSIX, though it is likely to be true in a sterile build+test environment. If the code can't find that struct member, it falls back to a fixed 1024 as the size of the blocks counted in "st_blocks":

    blkcnt_t st_blocks;    /* number of 512B blocks allocated */

So the code defaults to using the wrong blksize on Linux. But the usage of blksize is faulty and incorrect as well. On POSIX (and Linux), st_blksize is the **preferred I/O size**, which means it gets set based on the block device and probably the filesystem. A RAID uses stripe-size x width (# of data stripes) as the "blksize" in the stat call. So a RAID with 64k stripes & 12 data disks would report a blksize of 768k. That is the optimal I/O size -- and it isn't a **power** of 2.

One would assume SuSE hasn't dropped support for filesystems on RAID, but it appears that any time stripe-size x width is not a **power** of 2, gdbm will fail -- as will the perl tests. Yet they are unlikely to fail on an artificially constructed build machine, where it is unlikely anyone would simulate a RAID for the build & test of perl. That is exactly the kind of cross-testing SuSE needs to do, but no longer does -- and each release that covers up another bug like this becomes effectively less useful.

As far as I can tell, this is a bug that goes back a ways, and it has stayed hidden entirely because people don't use a "build-system"-configured machine as their production machine.
-- To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-factory+owner@opensuse.org