On 2016-04-15 16:41, Xen wrote:
Carlos E. R. schreef op 15-04-16 03:26:
I do not think the state of the mounted filesystem is registered on the disk itself, but I could be wrong. That would be a very weird way to do things. In a hibernated instance, nothing on the disk should change except the files that are open, and of course the kernel's internal representation.
Well, there is the "dirty" flag, and the journal. Mounting a filesystem that is "dirty" is normally refused; fsck has to run first. Depending on the case and the filesystem type, the journal is either discarded or replayed, and then the filesystem's consistency is checked and corrected. Data in files could still be wrong. When the hibernated system returns, it finds a clean filesystem, but treats it as if it were unchanged from when it went to sleep... It is disastrous.
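For ext2/3/4 that "dirty" flag can be inspected from userspace with tune2fs. A minimal sketch; the device name is hypothetical, and the parsing below runs on captured sample output so it works without root:

```shell
# On a real system (needs root, device name hypothetical):
#   tune2fs -l /dev/sdXN | grep -i 'Filesystem state'
# Sample output captured as a string so this runs anywhere:
sample='Filesystem state:         not clean'
# Extract the value after the colon and strip leading spaces
state=$(printf '%s\n' "$sample" | awk -F: '{gsub(/^ +/, "", $2); print $2}')
echo "$state"   # "not clean" means the dirty flag is set; fsck will be forced
```

"clean" here is exactly what the resuming hibernated system would *wrongly* assume nothing has touched.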
But I do think that the running system expects to have exclusive ownership over the currently mounted partitions.
Absolutely.
We once had an issue with that relating to XFS. A regular filesystem driver does not expect changes made by anything else, but networked filesystems might. Anyway.
No, it is the same thing. Only the local kernel can write. A networked mount is similar to multitasking: it is the local kernel who writes, but orders can come from many users and processes, many of them over the network.
I think you are correct in saying that if a hibernated system resumes on a filesystem that no longer agrees with its internal representation, things might get messed up badly.
I know that I'm correct, because I have done this and had to clean up the disaster. :-} And believe me, it is a horrible disaster. The kind that needs a format and restore from backup. Maybe of all partitions.
Provided you share filesystems.
Actually, you only need to "list" the filesystem in fstab. Unless the entry ends in "0 0", the init system will automatically fsck all listed entries, regardless of whether they need to be mounted at the time. And since entries not listed in fstab can be managed automatically by systemd, I'm not sure what the current situation is regarding fsck on boot for filesystems that are not listed in fstab.
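As a sketch, it is the sixth fstab field (fs_passno) that controls this; the devices and mount points below are made up:

```
# <device>              <mount> <type> <options> <dump> <fsck pass>
UUID=1111-hypothetical  /       ext4   defaults   0      1   # root: checked first
UUID=2222-hypothetical  /data   ext4   defaults   0      2   # checked at boot
UUID=3333-hypothetical  /spare  ext4   noauto     0      0   # "0 0": never fsck'd at boot
```

So even a "noauto" entry gets fsck'd at boot unless that last field is 0.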
But this is the biggest concern, because most dual-boot systems would share filesystems.
That implies shared filesystems would need to get unmounted and this is not a viable solution.
Correct.
Don't warn beforehand, just warn afterward. Make one swap the default choice and then, when the user proceeds to install, say
"Note that you now have two systems running on one swap. When hibernating, make sure not to boot the other system because it will overwrite the hibernate file. Even if you have two swaps, you cannot hibernate one and boot the other if you have any shared filesystems."
Something like that, yes.
Then if a user has chosen to make another swap, you might still say
"Note that having two swaps in principle allows you to boot one system while the other is hibernated. However, this is severely risky if you have any shared filesystems and they are mounted prior to hibernating. Do not access filesystems mounted by the other OS."
Right.
PS. if you have one swap and you mount the other system by accident, that means the hibernate file gets overwritten (?) and you are safe from unwanted side effects from dual-mounted filesystems.
Yes.
You will just have a system that was not "properly" shut down, and that will be all. So I feel one swap is even safer than two swaps.
However... the second system will probably look at the swap and see that it holds a hibernation image. It may attempt to load it, notice that the signature does not match its kernel, and perhaps abort. If it does not abort but simply disables that swap, disaster may follow: it may erase the image and continue, and then find some dirty filesystems which it will try to correct. IMHO, it should abort and ask. Maybe the user then recognizes the situation and can boot the correct system to shut it down properly, then retry the second system. Thus, a single swap is way safer.

-- 
Cheers / Saludos,

Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
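For context on how the second system notices the image at all: with the in-kernel swsusp, storing a hibernation image rewrites the swap area's "SWAPSPACE2" signature (the last bytes of its first page) to "S1SUSPEND". A minimal sketch using a hard-coded sample value so it runs without root; the dd command in the comment and the device name are illustrative:

```shell
# On a real system with 4 KiB pages, the signature could be read with
# something like: dd if=/dev/sdXN bs=1 skip=4086 count=10 2>/dev/null
# (needs root; device name hypothetical). Sample value used here instead:
sig='S1SUSPEND'
if [ "$sig" = "S1SUSPEND" ]; then
  echo "hibernation image present"   # booting another OS now risks the disaster above
else
  echo "no hibernation image"
fi
```

This is why resume tooling can tell a hibernated swap from a plain one before deciding to load, skip, or wipe the image.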