Mailinglist Archive: opensuse (1695 mails)

Re: [opensuse] 10.2 no RAID to 11.0 RAID 1

----- Original Message -----
From: "Randall R Schulz" <rschulz@xxxxxxxxx>
To: <opensuse@xxxxxxxxxxxx>
Sent: Monday, September 22, 2008 8:45 PM
Subject: Re: [opensuse] 10.2 no RAID to 11.0 RAID 1

On Monday 22 September 2008 16:21, Carlos E. R. wrote:
The Monday 2008-09-22 at 19:01 -0400, Andrew Joakimsen wrote:
There is a howto, and it is included with the distro.

On what page of the manual? I never saw it....

The howtos are not part of the manuals, they are independent, and
often by different authors:

cer@nimrodel:~> ls /usr/share/doc/howto/en/txt/ | grep -i raid

The last one is the one I mean. You have to install the howtos first
(txt or html versions), or read them directly on the Internet (TLDP, I
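For anyone who hasn't installed them yet, here is a minimal sketch along the lines of Carlos's command. The helper name is my own invention, and the default path is an assumption based on where openSUSE's howto package puts the text versions; adjust it if your setup differs:

```shell
# Search an installed howto directory, case-insensitively.
# Default path assumes the openSUSE "howto" package's text howtos.
find_howtos() {
    pattern=$1
    docdir=${2:-/usr/share/doc/howto/en/txt}
    if [ -d "$docdir" ]; then
        # List the directory and filter by the requested pattern.
        ls "$docdir" | grep -i "$pattern"
    else
        echo "no howtos at $docdir -- install the howto package or read TLDP online" >&2
        return 1
    fi
}
```

Usage would be `find_howtos raid`, mirroring the `grep -i raid` above.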

And do we still cling to the notion that local content index and search
is unnecessary??

And if you do, wait until you collect 5 GB of PDF, gzip-compressed
PostScript and HTML documents, as I have...

The argument is that, the way the math works out for me, with indexing I suffer
slowness 100% of the time in order to get a speed-up 1% of the time.

That math is backwards to me, and it's far worse than merely 100 to 1 in

I would rather have my machine as fast as possible 100% of the time, and have
to go looking for something the hard way 1% of the time.
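To make that arithmetic concrete, here is a toy expected-cost model of the trade-off. Every number in it (the search fraction, the indexing overhead, the speed-up factor) is an illustrative assumption, not a measurement:

```shell
# Toy model: searching is 1% of my work, the indexer taxes every
# operation by 5%, and an indexed search is 10x faster. All made up.
expected_cost() {
    awk 'BEGIN {
        search_frac = 0.01
        overhead    = 0.05
        speedup     = 10
        # Relative cost without an index: 1 unit for everything.
        no_index   = 1.0
        # With an index: every op pays the overhead; searches get the speedup.
        with_index = (1 - search_frac) * (1 + overhead) \
                   + search_frac * (1 + overhead) / speedup
        printf "no index: %.2f  with index: %.2f\n", no_index, with_index
    }'
}
expected_cost
```

Under these made-up numbers the indexed machine is about 4% slower overall, which is the point being argued here; flip the assumptions (search-heavy workload, tiny overhead) and the comparison flips too.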

The rest of the time, ordinary organization will let me find a program or
document immediately without having to rely on an indexing and search system.

If I have a library of documents or some other too-large-for-that mass of data,
then of course I place it in a library-type application or database that has
indexing and searching. But such a system rarely has to rebuild its indexes or
search constantly for random changes in the data: whenever any data is added or
changed, the relevant indexes are surgically updated the same way the database
itself is updated with the payload data. By contrast, a desktop indexer has to
constantly search for all the random changes I may make to the directories
within its scope.
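The distinction can be sketched with a toy one-entry-per-file index; the file names and helper names here are invented for illustration:

```shell
# Toy index: one file name per line, kept sorted. A database-style
# "surgical" update touches only the changed entry; a desktop-indexer-style
# rebuild rescans the whole tree.
INDEX=index.txt

build_index() {     # full rescan: cost grows with the entire tree
    find "$1" -type f -printf '%P\n' | sort > "$INDEX"
}

index_add() {       # surgical update: merge one entry into the sorted index
    printf '%s\n' "$1" | sort -m -o "$INDEX" "$INDEX" -
}

index_remove() {    # surgical removal of exactly one entry
    grep -vx -- "$1" "$INDEX" > "$INDEX.tmp"
    mv "$INDEX.tmp" "$INDEX"
}
```

A database calls the equivalent of `index_add`/`index_remove` at the moment it writes the payload; a desktop indexer is stuck calling the equivalent of `build_index` (or watching for changes) over and over.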

Reiser4 with built-in indexing (or via module) may be the answer for that,
allowing indexing without the constant searching, compiling, collating and
index rebuilding. The filesystem is in essence a database engine, and it can
maintain indexes the way database engines do.

Finally, even with desktop indexing, /usr/share/doc is not within anyone's
desktop or home directory, so presumably it wouldn't be indexed by these things
_anyway_.

As for the other possible scenarios: if these things are configured to index
the entire filesystem / all filesystems, then that is automatically horrendous
and wrong, even if I could be convinced to tolerate them in a home directory.
A user or a sysadmin might put anything anywhere, and it's patently stupid to
let some indexer search through gigs of irrelevant data, repeatedly. Not to
mention that merely accessing a file may screw up some other process that
watches that file's access time.

If the indexers only search a specific list of directories, whether all in the
user's home or including some elsewhere like /usr/share, well, if the user can
be expected to administer the list of indexed directories, then they already
know enough not to need the indexer in the first place. So the benefit comes
down to something like, "Well, the user may not know the directories where docs
are, but the package or distribution developers have preconfigured the list
into the indexer." So by default we should all have our PCs run slow and our
drives die sooner so that some user doesn't have to know a short list of likely
places to look for docs? That is insane.

Brian K. White brian@xxxxxxxxx
filePro BBx Linux SCO FreeBSD #callahans Satriani Filk!

To unsubscribe, e-mail: opensuse+unsubscribe@xxxxxxxxxxxx
For additional commands, e-mail: opensuse+help@xxxxxxxxxxxx
