Mailinglist Archive: opensuse-factory (602 mails)

Re: [opensuse-factory] Re: [PLEASE SPEAK UP] Disabling legacy file systems by default?
On 2/4/19 3:03 PM, Jim E Bonfiglio wrote:
Hi Simon - as far as I'm aware, the currently insecure file system
implementations you are seeking to disable also have bugs and known
exploits. While my proposal would not necessarily reduce the total
number of bugs, it ought to eliminate the need to concern oneself with
insecure file systems that have bugs and known exploits.

By "virtualizing" or "extrapolating" the file system subsystem, these
buggy and known-exploitable file systems (pretty much all of them)
could be contained rather than removed and/or patched. This may
provide more flexibility to users of (open)SUSE, or the general Linux
community at large, should this be implemented upstream as previously
mentioned.

I hope this email provides clarity.

As stated previously, this is the wrong list for such discussion.
linux-fsdevel is the place for Linux-wide file system development, not
a single distro's rolling-release topic list.

What you suggest is to turn Linux into a microkernel architecture. It's
been tried, specifically in the storage domain. Here's one recent
example:
From the paper:
We concluded that, while it is theoretically possible to isolate
unmodified code, doing so in a way that preserves strong, byte-level
coherence leads to too much performance overhead and engineering
complexity. Components in a shared memory program are interwoven with
call patterns and data flows that are too complex to break apart without
modification. We also concluded that it is difficult for domains to be [...]

IOW, "a cool project but the performance sucks."

That reads pretty much like every analysis of every attempt to convert a
monokernel to a microkernel architecture.

The other "big" alternative is to create a library that allows any file
system to be compiled as a FUSE module.

In either case, if it's a solution you want, you will either need to
write it yourself or find someone willing to accept payment from you to
do it. To suggest that others perform the work to implement your ideas
is considered especially rude in a community setting.

The approach taken by the blacklist is a low-cost, low-effort way to
reduce the attack surface for the vast majority of users. As has been
stated elsewhere, the high-use file systems are well maintained and
security vulnerabilities are fixed as quickly as possible.


On Mon, 2019-02-04 at 14:55 +1030, Simon Lees wrote:

On 02/02/2019 04:12, Jim E Bonfiglio wrote:
Hi Simon - I would challenge you to examine the feasibility of such
containment across the entirety of the storage subsystem, as this seems
to be a significant value-add to SLES customers, not to mention
openSUSE users. As far as I'm aware, it is not necessary to disable
features of a subsystem to eliminate its attack surface.

Per my previous reply to Martin Wilck, I would not complain should the
file systems be "made secure"; however, I don't think that is feasible,
as all file systems have already had, or will very likely have in the
future, a security vulnerability discovered such that work becomes
necessary to correct it. Instead of addressing each of the insecure
file systems through correction or disablement, the attack surface
could be eliminated vis-à-vis some sort of virtualized layer between
the subsystem and its connecting components.

In lieu of a virtualized layer between the subsystem and its
components, I suppose disabling the file systems would eliminate the
current risk, but it does not address future risk from any sort of CVE
bulletin or other discovery of a file system vulnerability. I
strongly recommend addressing the root cause of this attack surface
rather than reducing the size of the surface itself.

Best, Jim

Well, that is also well outside my scope. Given SUSE's upstream-first
policy, such a set of changes would have to be developed in the
upstream kernel before we adopted them in SUSE / openSUSE. Even if
someone is willing to completely revamp the storage subsystem, and to
do it in such a way that doesn't cause a significant performance hit,
the process would still take at least a year, likely more, before it
reaches our kernels. That means we need to do something in the
meantime, and disabling uncommonly used and not well maintained
filesystems makes sense for that.

Even if it was rewritten, it would still have bugs, as any
complex software does, and eventually someone would find a way to
exploit them.
Jeff Mahoney
