On 05/03/2015 08:22 PM, Ted Byers wrote:
> First question is, what is the likelihood that there is physical damage to the SSD?
Less likely than damage to a rotating disk in a sudden power-out event. I've had even modern drives die that way: a head crash on the outer tracks, the 'system' tracks where the system modules are kept. These, I'm told, contain the microcode patches and the 'alternate' sectors. See http://www.pcguide.com/ref/hdd/geom/format_Defect.htm

Sidebar: I don't know why these are on the outer tracks; that's an area more susceptible to head crash damage. Ever since the BFFS[1][2][3] of the early 1980s, file systems have replicated the superblock information and/or moved it away from the critical area at the outer tracks of the disk. Since the drive electronics present a linear array of blocks to the OS, regardless of how much remapping gets done to avoid 'shipped defects' by the defect-mapping microcode and alternate sectors, it really doesn't matter where all this lives. Putting it all at the beginning of the disk is, to my mind, a dumb idea. There's no reason the "cylinder group" method should not be applied here too.

However, as Drucker pointed out, the last buggy whip manufacturer must have been *very* efficient[4], so we might expect rotating rust to become very efficient and sensible before giving way to SSDs. We might also expect file systems to become very logical and efficient before, in turn, giving way. See

<quote src="http://www.zdnet.com/article/why-ssds-are-obsolete/">
SSDs were built because there are billions of SATA and SAS disk ports available. Filling some of those ports with SSDs promised to be quite profitable - a promise fulfilled in the last 5 years.

But now that non-volatile memory technology - flash today, plus RRAM tomorrow - has been widely accepted, it is time to build systems that use flash directly instead of through our antique storage stacks. The various efforts to decrease latency - SATA 3, NVMe, and others - still add layers of software between our applications and our data, creating complexity and wasting CPU cycles.
</quote>

Essentially, SSDs are the ultimate DASD - Direct Access Storage Device. 64 bits of address space is ... what, 16 exabytes? And it *should* be _D*I*R*E*C*T*L*Y_ addressable. A lot of the metadata and indexing on our drives is there because they are drives, serially addressable. With true DASD, direct addressing of the SSD, much of this file system overhead goes away ... well, we still need indexing and allocation management, but the very nature of file systems changes with direct addressing. Forcing a model that works for serially addressed rotating media onto directly addressable storage makes no sense. The layering of indirection just slows things down and adds complexity. That needs to be swept away. But so long as we have SSDs connected serially via a SCSI-like interface, that's what we're stuck with, even though it throws away better than 50% of the advantage of semiconductor storage. (A trivial demonstration of that flat "array of blocks" view follows below.)

Ironically, it was the outdated Newton that showed what could be done with databases, OO databases, as a directly memory-mapped file system. The system removed many limits, such as file types, and the contents were not only directly addressable, they were associative (i.e. addressed by content rather than just by location) and their semantics were extensible (e.g. 'address of contact' did not need to replicate the contact information; nothing was fixed-format, as in ACT! or many other database systems).
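To make that concrete: from the OS side a drive, rotating or solid-state, is nothing but a flat run of addressable sectors, and you can read any of them directly, no file system involved. A minimal sketch (the device name /dev/sdb and the offset are placeholders, not a recommendation; needs root):

    # read one 512-byte sector at byte offset 1 MiB (sector 2048),
    # treating the device as nothing more than a flat array of blocks
    dd if=/dev/sdb bs=512 skip=2048 count=1 2>/dev/null | od -Ax -tx1z | head

Every layer of file-system metadata exists to impose structure on that flat array; with truly direct addressing most of that translation machinery would be redundant.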
> Second, what would be the steps to try to restore this machine? I know, in a worst case scenario, I can buy another SSD, install the latest and greatest release of OpenSuse, but first I need to see if I can fix this. I really do not want to lose the data I have on that drive. Can you point me to a few of your favourite web pages that deal with this specific problem?
You have another machine? Try it there. See if it can be addressed as a device, somehow, anyhow, before worrying about content. Flip the RO switch if need be. While there is the SUSE rescue CD, there are many other LiveCDs that are good for diagnosis and debugging. See http://livecdlist.com/purpose/forensics/ (A sketch of the first checks to run is at the end of this message.)

[1] http://en.wikipedia.org/wiki/Unix_File_System, in particular:

<quote>
Each cylinder group has the following components:
- A backup copy of the superblock
- A cylinder group header, with statistics, free lists, etc., about this cylinder group, similar to those in the superblock
- A number of inodes, each containing file attributes
- A number of data blocks
</quote>

Note in particular that the superblock is replicated; that replication is also what makes the recovery sketch at the end of this message possible.

[2] This http://pages.cs.wisc.edu/~remzi/OSTEP/file-ffs.pdf is worth critics of BtrFS reading for historical context. Although the BFFS was a lot faster than the V6/V7 FS, it still showed up the problems of fixed allocation regions, a problem that the ext[234] file systems have not avoided.

[3] http://docs.freebsd.org/44doc/smm/05.fastfs/paper.pdf This is interesting in that it also discusses wasted space as a function of block size, to justify fragmenting a block and using it to hold parts of more than one file. Many files and scripts are very short, less than 512 bytes. 'du' is not very good at reporting here: a du of /etc/hostname says it's 4K when it's less than 50 bytes; you need the 'apparent size' option. Try that on the /etc directory and see how much 'packing' could be done. Here's a crude and nasty one-liner:

    find . -type f -size -128c -print0 | xargs -0 du -b | sort -rn

[4] before giving up the buggy whip business and seguing into leather items for the BDSM trade.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
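As promised above, a sketch of the first checks on the second machine. The device name /dev/sdb and the image paths are placeholders for whatever your system assigns; smartctl comes from the smartmontools package and ddrescue from gddrescue:

    # 1. Did the kernel even enumerate the drive?
    dmesg | tail -20
    lsblk -o NAME,SIZE,MODEL
    # 2. If it answers at all, ask for its health record
    smartctl -a /dev/sdb
    # 3. Before ANY repair attempt, image it and work on the copy
    ddrescue /dev/sdb /mnt/spare/ssd.img /mnt/spare/ssd.map

Only once you have an image should you let fsck or anything else write to the thing.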
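And since footnote [1] mentions superblock replication: if the drive is readable but the file system won't mount because the primary superblock is gone, the ext family lets you fall back on one of the backups. A sketch, assuming an ext[234] partition at the placeholder /dev/sdb1 (better still, aim it at a copy of the partition rather than the original):

    # where did mke2fs put the backup superblocks?
    dumpe2fs /dev/sdb1 | grep -i superblock
    # run fsck against a backup instead of the (possibly dead) primary;
    # 32768 is the usual first backup on a 4K-block file system
    fsck.ext4 -b 32768 -B 4096 /dev/sdb1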
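On footnote [3]'s 'apparent size' point, the gap between allocated blocks and actual bytes is easy to see with GNU du (the path is just an example):

    du -sh /etc                    # space allocated in whole blocks
    du -sh --apparent-size /etc    # bytes the files actually contain

The difference between the two numbers is the 'packing' that block fragments were invented to recover.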