Linda Walsh wrote:
On Saturday 2007-12-22 at 16:06 +0100, Anders Johansson wrote:
The real solution here is to find and fix the bug that causes beagle to allocate so much memory. It doesn't happen on all systems.

Without question, this is the best solution.
Anders Johansson wrote:
I wouldn't say "probably". It shouldn't be par for the course for an application not to check return values from memory allocation functions.
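For what it's worth, the check in question is a one-liner. A minimal C sketch, illustrative only -- not taken from beagle's source (beagle itself is written in C#, where the analogous failure surfaces as an OutOfMemoryException):

    /* Minimal sketch of checking an allocation's return value.
     * Illustrative only; not beagle's actual code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t n = 1 << 20;             /* 1 MB, an arbitrary example size */
        char *buf = malloc(n);

        if (buf == NULL) {              /* allocation failed: handle it */
            fprintf(stderr, "out of memory allocating %zu bytes\n", n);
            return 1;                   /* degrade gracefully, don't crash */
        }
        memset(buf, 0, n);
        free(buf);
        return 0;
    }

Note that on Linux with default overcommit settings, malloc() rarely returns NULL anyway; the failure shows up later via the OOM killer, which is part of why an application that just keeps allocating hurts the whole system rather than only itself.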
Aaron Kulkis wrote:
As I said earlier... the whole thing is poorly written.
And you say this based on???
Its performance. It's SUPPOSED to be an unobtrusive background process, but it can cripple a high-end machine through extreme resource hogging. Unfriendly behavior is the very definition of poor code.
Do you have _any_ expertise in the source?
Not at all, but that's not even the issue. We're not talking about "there's a faster way to sort this data" sorts of tweaks... we're talking about code that is widely known to effectively cripple any system it runs on, due to nothing more than the files it's handling, and how many there are.
(if you do, sorry, but your attitude is not productive). It _appears_ you know nothing about how it is written, but are basing your opinion on its behavior in certain configurations.
I'm basing my evaluation on how it performs and on its impact on system performance... which is the ultimate standard for judging whether any code is "good enough" or not.
This is why I made a comment about *not* using "swap" on a system -- if you are using swap on any regular basis (my threshold is using swap *anytime* during 'normal' day-to-day usage), you are running applications that are "too big" for your machine. Whether this is due to poor application memory usage or to poor planning on the part of the system owner depends, in large part, on the promises or expectations set by the developer or owner of the program.
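For anyone who wants to apply that threshold themselves, the check is cheap: 'free' at a shell prompt shows it, or programmatically, a minimal sketch assuming the Linux /proc/meminfo layout:

    /* Minimal sketch: report how much swap is currently in use by
     * comparing SwapTotal and SwapFree from /proc/meminfo (Linux).
     * Any nonzero difference means pages have been pushed to swap. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        long swap_total = 0, swap_free = 0;

        if (f == NULL) {
            perror("/proc/meminfo");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            sscanf(line, "SwapTotal: %ld kB", &swap_total);
            sscanf(line, "SwapFree: %ld kB", &swap_free);
        }
        fclose(f);
        printf("swap in use: %ld kB\n", swap_total - swap_free);
        return 0;
    }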
I've got 2 GB of DDR2 on this machine, and 3 GB of swap, on a Centrino Core Duo running at 800 MHz with two internal 100 GB SATA drives, of which my SuSE installation uses 140 GB. And a running beagle process makes this machine absolutely unusable... even without a GUI.
Certainly, if I am running on a system with 128 MB of memory and a 650 MHz mobile CPU, and load a full "suse 10.x" desktop with all the features, I am asking to be "shot in the foot". If a release (like suse 10.2, or Windows 98) says it will run best with 1 GB of memory and a 1 GHz processor, and my machine has 2 GB and a 2 GHz processor, yet the release runs like a "dog" -- then I'd say it is the fault of the release packager (they made the choice of what packages to include 'by default').
Certainly, if the *end user* chooses to run more applications than their computer can comfortably fit in memory, how can the application developer account for that?
Beagle will grow and grow and grow until it uses all available swap. The only way to satisfy beagle's appetite appears to be to have enough memory to load your entire home directory tree into it. But I don't know of any motherboard sold for under US $10,000 that can accommodate 60 GB of RAM.
Beagle should be scrapped and rewritten from the ground up, starting with the design assumption that it is to behave as an unobtrusive background process -- not the current one, which can take over the whole system with a "feed me" attitude, as if the whole purpose of a computer and its data is to provide something for a beagle process to index.
Do you have documentation or direct knowledge of what the "design goals" were? If not, how do you know it wasn't designed that way?
I'm saying that the design goals either were not met, or they were utterly inappropriate.
Something the beagle developers cannot know is how their application will be installed by release packagers. One example of an outstanding 'bug' (or feature, depending on interpretation) that can affect beagle's performance is how it is run by 'cron.daily'. From my own experience under 9.3, the default is to run cron.daily 24 hours after it last ran -- but if something delays it running "overnight" (like the machine being off or suspended), it will run within 15 minutes of the machine becoming active. IT WON'T WAIT until the 'middle of the night', as you might want. This has nothing to do with beagle or its developers.
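The mechanism, roughly, is a sketch like the following (this is NOT SUSE's actual run-crons script, which is shell; the marker path and invocation below are illustrative): a checker that cron fires every 15 minutes runs the daily jobs whenever a marker file is at least 24 hours old, which is why a machine that was off overnight fires them shortly after it wakes.

    /* Rough sketch of the scheduling logic described above; marker
     * path and job invocation are illustrative, not SUSE's actual
     * configuration. Run this every 15 minutes from cron. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <sys/stat.h>

    #define MARKER "/var/spool/cron/lastrun/cron.daily"  /* illustrative */

    int main(void)
    {
        struct stat st;

        /* no marker yet, or marker older than a day => run the jobs */
        if (stat(MARKER, &st) != 0 || time(NULL) - st.st_mtime >= 24*60*60) {
            system("run-parts /etc/cron.daily");  /* illustrative invocation */
            FILE *f = fopen(MARKER, "w");         /* re-arm the 24h timer */
            if (f)
                fclose(f);
        }
        return 0;
    }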
I'm likely to be using this computer at all hours of the day and night... I wake up, get an idea, do something, and then go back to sleep...
Ideally, the beagle indexing process would run once (either at night, or immediately if needed), and then be able to monitor the filesystems and directories for changes using the "fam" (famd) package.
fam is another thing I've banished from my installations, for similar reasons. Maybe a good idea, but it too suffers from poor implementation.
The "fam" function (and as extended for directories and/or devices) monitors when any change is done to its monitored file-system objects, then calls "listening" programs to process the new or changed objects as they are changed on disk. Ideally, you would then need no 'batch' updating, but such would be done in "bits" throughout the day as monitored files are changed.
That being said, if a system doesn't have the OS support or the resources needed to run 'famd' without degradation, the system will still be painful to use (shorthand: "unusable").
I'm a VERY strong advocate of loading up on memory, even if it means cutting into the CPU budget (and losing a few MHz), because page-faults to/from swap are VERY expensive in terms of performance...
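To put rough, order-of-magnitude numbers on that (my assumptions, not measurements): a RAM access costs on the order of 100 ns, while a hard page fault serviced by a 7200 rpm disk costs around 10 ms of seek plus rotational latency. The ratio is 10 ms / 100 ns = 100,000 -- one page fault buys roughly a hundred thousand memory accesses, so even a tiny fault rate swamps whatever you gained from a few extra MHz.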
Be careful about "global generalizations" that a product is bad, though, just because it doesn't run well in a particular situation.
While I don't doubt that beagle does run acceptably in some instances, I've never seen it do so.
Linda