I'm installing 9.3 on an old Sony Vaio machine that was formerly a Windows box, using the network install CD. When I get to the point where it starts the graphical installer, it tells me that I don't have enough physical memory (the machine has 128 MB) and I need to activate a swap partition. So I aborted the install, put in my Knoppix disk, deleted the Windows partition, and put in a 512 MB swap partition at the beginning of the disk. Then I restarted the SuSE installation and accepted the default partitioning scheme recommended by SuSE as follows:

- Delete partition /dev/hda1 509.9 MB (Linux swap)
- Create boot partition 64.4 MB (/dev/hda1 with ext2)
- Create swap partition 901.1 MB on /dev/hda2
- Create root partition 73.6 GB (/dev/hda3 with reiser)

(note: this is an 80 GB Maxtor HDD, if that makes any difference)

So when I told it to go ahead with this install, it errored out again because the swap partitions interfered with each other. So my question is this: SuSE wants a 900 MB swap partition, but will it run OK with the 509 MB swap partition? Or should I put in my Knoppix disk and resize the swap space to 900 MB before proceeding with the install?

Thanks
-Nick
On Sunday 17 July 2005 05:55 pm, Nick Jones wrote:
So when I told it to go ahead with this install, it errored out again because the swap partitions interfered with each other. So my question is this: SuSE wants a 900mb swap partition, but will it run OK with the 509mb swap partition? Or should I put in my Knoppix disk and resize the swap space to 900mb before proceeding with the install?
Why are you farting around with Knoppix? Let SuSE do it all. It doesn't need any help. Nuke it all, and let SuSE set it up the way it wants. Oh, and hurry over to crucial.com and get another stick of memory. You will be MUCH happier with at least 512. -- _____________________________________ John Andersen
--- John Andersen <jsa@pen.homeip.net> wrote:
On Sunday 17 July 2005 05:55 pm, Nick Jones wrote:
So when I told it to go ahead with this install, it errored out again because the swap partitions interfered with each other. So my question is this: SuSE wants a 900mb swap partition, but will it run OK with the 509mb swap partition? Or should I put in my Knoppix disk and resize the swap space to 900mb before proceeding with the install?
Why are you farting around with Knoppix?
Let SuSE do it all. It doesn't need any help.
Nuke it all, and let SuSE set it up the way it wants. Oh, and hurry over to crucial.com and get another stick of memory. You will be MUCH happier with at least 512.
-- _____________________________________ John Andersen
Nick Jones wrote:
So when I told it to go ahead with this install, it errored out again because the swap partitions interfered with each other. So my question is this: SuSE wants a 900mb swap partition, but will it run OK with the 509mb swap partition?
The direct answer is yes. But 900Mb swap for a machine with 128Mb is overkill. There is a rule of thumb, but I can't remember what it is :-( - in your case, I would probably opt for a 256M swap partition. You can set up your partitions using fdisk (with Knoppix or the SuSE rescue system), then start the install again.

/Per Jessen, Zürich
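For reference, the fdisk route mentioned above boils down to roughly the following. This is only a sketch, not something verified against the 9.3 installer, and the device names (/dev/hda, /dev/hda2) are placeholders for whatever your own layout uses:

  # Partition the disk interactively; give the swap partition type 82
  # (Linux swap) with the 't' command, then write the table with 'w'
  fdisk /dev/hda

  # Initialize the partition as swap space
  mkswap /dev/hda2

  # Activate it so the installer has enough virtual memory to run
  swapon /dev/hda2

  # Confirm the kernel is actually using it
  cat /proc/swaps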
On Monday 18 July 2005 13:20, Per Jessen wrote:
But 900Mb swap for a machine with 128Mb is overkill. There is a rule of thumb, but I can't remember what it is
The rule of thumb is to make swap twice the size of actual RAM; for example, if you have 128 MB RAM, make your swap 256. In my case I stopped the doubling rule once I hit 512. In other words, with 512 RAM and above I am using 512 swap. It seems to work fine. However, numerous people have told me they are still doubling with high amounts of RAM, for example 1 GB RAM and 2 GB swap. These people also seem to be happy with how their systems are working.

Bryan
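Whatever ratio you settle on, the current situation is easy to check from a running system (standard procps/util-linux tools, nothing SuSE-specific):

  # Physical memory and swap, in megabytes
  free -m

  # Each active swap area, with its size and usage
  swapon -s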
On Monday 18 July 2005 2:02 pm, Bryan Tyson wrote:
On Monday 18 July 2005 13:20, Per Jessen wrote:
But 900Mb swap for a machine with 128Mb is overkill. There is a rule of thumb, but I can't remember what it is
The rule of thumb is make swap twice the size of actual ram, for example if you have 128 MB RAM make your swap 256.
In my case I stopped the doubling rule once I hit 512. In other words with 512 RAM and above I am using 512 swap. It seems to work fine. However numerous people have told me they are still doubling with high amounts of RAM, for example 1 GB RAM and 2 GB swap. These people also seem to be happy with how their systems are working.

For the most part, your strategy seems to hold. Generally, 512M should be sufficient except in some very extreme cases.
--
Jerry Feldman <gaf@blu.org>
On Monday, July 18, 2005 @ 10:02 AM, Bryan Tyson wrote:
On Monday 18 July 2005 13:20, Per Jessen wrote:
But 900Mb swap for a machine with 128Mb is overkill. There is a rule of thumb, but I can't remember what it is
The rule of thumb is make swap twice the size of actual ram, for example if you have 128 MB RAM make your swap 256.
In my case I stopped the doubling rule once I hit 512. In other words with 512 RAM and above I am using 512 swap. It seems to work fine. However numerous people have told me they are still doubling with high amounts of RAM, for example 1 GB RAM and 2 GB swap. These people also seem to be happy with how their systems are working.
Bryan
SuSE defaults to 1.5 RAM. I. e., if you have 1G of RAM on your machine, the default configuration that SuSE will give you will have 1.5G of RAM. I have also read elsewhere that this is the proper ratio. But, maybe that is a low end number and 2 times RAM is really better (?). Greg Wallace
On Tuesday 19 July 2005 01:54, Greg Wallace wrote:
SuSE defaults to 1.5 RAM. I. e., if you have 1G of RAM on your machine, the default configuration that SuSE will give you will have 1.5G of RAM. I have also read elsewhere that this is the proper ratio. But, maybe that is a low end number and 2 times RAM is really better (?).
Note that a hard drive is RAM. Twice your RAM would likely be quite a few gigabytes
On Monday, July 18, 2005 @ 4:03 PM, Anders Johansson wrote:
On Tuesday 19 July 2005 01:54, Greg Wallace wrote:
SuSE defaults to 1.5 RAM. I. e., if you have 1G of RAM on your machine, the default configuration that SuSE will give you will have 1.5G of RAM. I have also read elsewhere that this is the proper ratio. But, maybe that is a low end number and 2 times RAM is really better (?).
Note that a hard drive is RAM. Twice your RAM would likely be quite a few gigabytes
Ok. Maybe I'm too old school. I was using RAM to refer to internal memory chips. Greg W
On Tuesday 19 July 2005 03:06, Greg Wallace wrote:
Note that a hard drive is RAM. Twice your RAM would likely be quite a few gigabytes
Ok. Maybe I'm too old school. I was using RAM to refer to internal memory chips.
This has nothing to do with age. RAM stands for (and has always stood for) Random Access Memory. It refers to how you access it. The opposite is sequential-access memory. RAM simply means that you can access any part without having to read through all the bits before it first. Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
On Monday, July 18, 2005 @ 5:17 PM, Anders Johansson wrote:
On Tuesday 19 July 2005 03:06, Greg Wallace wrote:
Note that a hard drive is RAM. Twice your RAM would likely be quite a few gigabytes
Ok. Maybe I'm too old school. I was using RAM to refer to internal memory chips.
This has nothing to do with age. RAM stands for (and has always stood for) Random Access Memory. It refers to how you access it. The opposite is Sequential memory.
RAM simply means that you can access any part without first having to read through all the bits before it first.
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
Right. I'm just used to calling internal memory RAM. Why? I don't know. So what is the common term you use when you're talking about the chips in your machine as opposed to, say, a disk drive? As you say they're both RAM. They're also both memory (one short term, one long term). Greg Wallace
On Tuesday 19 July 2005 03:34, Greg Wallace wrote:
Right. I'm just used to calling internal memory RAM. Why? I don't know. So what is the common term you use when you're talking about the chips in your machine as opposed to, say, a disk drive? As you say they're both RAM. They're also both memory (one short term, one long term).
Sadly I sometimes fall into the same misuse. I try to keep myself to using terms like 'internal memory' (slightly inaccurate, but not as inaccurate as RAM) or 'main memory' (perhaps the best term to use). Terminology is important, and this is probably the most misused term of all. Looking at the output of 'dict ram', even the dictionaries get it wrong. WordNet is worst of all: they have RAM as synonymous with read/write memory, which is so far up the wall I want to ask them to paint the ceiling while they're up there. ROM can be (and usually is) RAM
On Monday 18 July 2005 9:34 pm, Greg Wallace wrote:
Right. I'm just used to calling internal memory RAM. Why? I don't know. So what is the common term you use when you're talking about the chips in your machine as opposed to, say, a disk drive? As you say they're both RAM. They're also both memory (one short term, one long term).

RAM has always been used to refer to internal memory. Disk is not random access, but is not fully sequential. While a program can address just about any byte on a disk drive, generally, one must position the heads at the proper cylinder, wait for the sector containing your data to pass under the heads, and then read that sector into internal memory, then extract your data from that sector. It is also orders of magnitude slower than RAM storage.
--
Jerry Feldman <gaf@blu.org>
On Tuesday 19 July 2005 8:00 am, Jerry Feldman wrote:
RAM has always been used to refer to internal memory. Disk is not random access, but is not fully sequential. While a program can address just about any byte on a disk drive, generally, one must position the heads at the proper cylinder, wait for the sector containing your data to pass under the heads, and then read that sector into internal memory, then extract your data from that sector. It is also orders of magnitude slower than RAM storage.

Correcting myself a bit, since I go back to the use of core memory. I once participated in writing a point of sale system for a fast food restaurant that was based on a DEC PDP-8 with 4K (12-bit words) of core memory. We had no external storage. All data was either transmitted to the home office each night or, in the case of a failure, recorded by the manager and snailed or called in. We would upload any program changes as part of the nightly data exchange. BTW: baud rate (nominally 1200bps) was actually controlled by timing loops in the program, since there were no UARTs - send start bit, loop; for each of the next 8 bits send a bit, loop; send stop bit. If the system crashed, we had to send a service person in with the program on punched paper tape and a paper tape reader.
--
Jerry Feldman <gaf@blu.org>
Jerry Feldman wrote:
On Monday 18 July 2005 9:34 pm, Greg Wallace wrote:
Right. I'm just used to calling internal memory RAM. Why? I don't know. So what is the common term you use when you're talking about the chips in your machine as opposed to, say, a disk drive? As you say they're both RAM. They're also both memory (one short term, one long term).

RAM has always been used to refer to internal memory. Disk is not random access, but is not fully sequential. While a program can address just about any byte on a disk drive, generally, one must position the heads at the proper cylinder, wait for the sector containing your data to pass under the heads, and then read that sector into internal memory, then extract your data from that sector. It is also orders of magnitude slower than RAM storage.
In some early computers, a drum or disk was the main memory. There was no core or any other type of "internal" memory.
Greg Wallace wrote:
Right. I'm just used to calling internal memory RAM. Why?
Why not? RAM - internal memory - is built up of RAM chips - DRAM or SRAM (or any of the other varieties). Calling a harddisk RAM is just not right in my book. The access only _appears_ to be random, when in truth it is sequential. The in-tray on my desk is also RAM (the access can be very random at times :-), but I don't call it RAM.

/Per Jessen, Zürich
Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
For many years, computers used non-volatile internal memory. It was called "core" memory.
On Monday, July 18, 2005 @ 6:08 PM, James Knott wrote:
Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
For many years, computers used non-volatile internal memory. It was called "core" memory.
Yep. I remember those days. I stay away from that term now because I think it sort of marks you as an "old timer". If you are in a conversation with a group of younger programmers and use that term, some of them have never heard it. But it is probably one of the more unambiguous terms you could use. If you say core, there's really no doubt about what you're referring to. Greg Wallace
Greg Wallace wrote:
On Monday, July 18, 2005 @ 6:08 PM, James Knott wrote:
Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
For many years, computers used non-volatile internal memory. It was called "core" memory.
Yep. I remember those days. I stay away from that term now because I think it sort of marks you as an "old timer". If you are in a conversation with a group of younger programmers and use that term, some of them have never heard it. But it is probably one of the more unambiguous terms you could use. If you say core, there's really no doubt about what you're referring to.
And if Macintosh computers used it, it would be called Apple core. ;-) Incidentally, I have a core plane (16 K bits IIRC) from an old Collins 8401 computer. At one point in my career, I even repaired core memory boards.
On Tuesday, July 19, 2005 @ 1:54 AM, James Knott wrote:

Greg Wallace wrote:
On Monday, July 18, 2005 @ 6:08 PM, James Knott wrote:
Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
For many years, computers used non-volatile internal memory. It was called "core" memory.
Yep. I remember those days. I stay away from that term now because I think it sort of marks you as an "old timer". If you are in a conversation with a group of younger programmers and use that term, some of them have never heard it. But it is probably one of the more unambiguous terms you could use. If you say core, there's really no doubt about what you're referring to.
And if Macintosh computers used it, it would be called Apple core. ;-)
Incidentally, I have a core plane (16 K bits IIRC) from an old Collins 8401 computer. At one point in my career, I even repaired core memory boards.
That does go back in time. I started on an IBM 360. I believe the company had just upgraded from a 1410 (I believe that's the right model number). But the 360 is as old a machine as I worked on (except I think they had a 1410 machine at the University I went to and I punched out a few programs on cards when I was there). I have always worked on the application end, so I never really got hands-on into the hardware myself. Greg Wallace
Greg Wallace wrote:
I stay away from that term now because I think it sort of marks you as an "old timer". If you are in a conversation with a group of younger programmers and use that term, some of them have never heard it. But it is probably one of the more unambiguous terms you could use.
Well, on Linux the dumpfile is called core.xxxx, so calling it a core-dump is a pretty good name :-)

/Per Jessen, Zürich
On Tuesday 19 July 2005 04:08, James Knott wrote:
Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
For many years, computers used non-volatile internal memory. It was called "core" memory.
I didn't know that, that is interesting. What was that based on? I'm guessing it wasn't based on the temporary flow of electricity, the way modern memory sticks are
On Tuesday 19 July 2005 09:05 am, Anders Johansson wrote:
On Tuesday 19 July 2005 04:08, James Knott wrote:
Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
For many years, computers used non-volatile internal memory. It was called "core" memory.
I didn't know that, that is interesting. What was that based on? I'm guessing it wasn't based on the temporary flow of electricity, the way modern memory sticks are
Core memory usually was little 'doughnuts' of magnetic material with wires running through the core. Depending on how the current flowed, the doughnut could be magnetized in one polarity or the other.... (0 or 1)
Bruce,

On Tuesday 19 July 2005 06:14, Bruce Marshall wrote:
On Tuesday 19 July 2005 09:05 am, Anders Johansson wrote:
...
For many years, computers used non-volatile internal memory. It was called "core" memory.
I didn't know that, that is interesting. What was that based on? I'm guessing it wasn't based on the temporary flow of electricity, the way modern memory sticks are
Core memory usually was little 'doughnuts' of magnetic material with wires running through the core. Depending on how the current flowed, the doughnut could be magnetized in one polarity or the other.... (0 or 1)
And reading a bit erased it, necessitating a write-back after every read. The major reason the PDP-11 included modify addressing modes was to optimize the pattern of reading a value, computing a new value and writing that new value back to the same location. The Unibus memory addressing model cooperated with the instruction architecture in making this possible, with a distinct "read, but don't refresh" cycle. Also, DECtape was random access tape, in the sense that you could write any block on the tape at any time and the drive would seek the tape. This was made possible by subjecting the tape to a formatting operation that assigned distinct locations to each sector, just as formatting a disk drive does. All interesting things now mercifully consigned to museums! And yes, pretty much off-topic.

Randall Schulz
On Tuesday 19 July 2005 9:28 am, Randall R Schulz wrote:
Also, DECtape was random access tape, in the sense you could write any block on the tape at any time and the drive would seek the tape. This was made possible by subjecting the tape to a formatting operation that assigned distinct locations to each sector, just as formatting a disk drive does.

Essentially, the formatting of a DECtape was to make the home position midway on the tape. It sets up an index, and the DECtape is essentially an indexed-sequential file system.
--
Jerry Feldman <gaf@blu.org>
On Tuesday 19 July 2005 9:05 am, Anders Johansson wrote:
I didn't know that, that is interesting. What was that based on? I'm guessing it wasn't based on the temporary flow of electricity, the way modern memory sticks are

Modern memory sticks are dynamic, and require power to maintain memory. Prior to about 1965, virtually all computers used core memory. Hence, when a program crashes we get the proverbial "core dump".
--
Jerry Feldman <gaf@blu.org>
Jerry Feldman wrote:
On Tuesday 19 July 2005 9:05 am, Anders Johansson wrote:
I didn't know that, that is interesting. What was that based on? I'm guessing it wasn't based on the temporary flow of electricity, the way modern memory sticks are

Modern memory sticks are dynamic, and require power to maintain memory. Prior to about 1965, virtually all computers used core memory. Hence, when a program crashes we get the proverbial "core dump".
"Dynamic" refers to the refreshing of the stored charge that represents the bit. There is also static memory, that doesn't have to be refreshed, but will also lose the data when power is removed.
Anders Johansson wrote:
On Tuesday 19 July 2005 04:08, James Knott wrote:
Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)

For many years, computers used non-volatile internal memory. It was called "core" memory.
I didn't know that, that is interesting. What was that based on? I'm guessing it wasn't based on the temporary flow of electricity, the way modern memory sticks are
It used tiny ferrite rings, which could be magnetized in either of two directions. The way they were read was to erase a bit and see if it changed polarity. Incidentally, this was referred to as "destructive read", in that you had to erase the data to read it. In core memory, there are the X & Y lines, to select the core you want to use, along with sense and inhibit lines. These two would run through all the cores in one bit plane. Then, when writing, the memory would try to set all the cores to one polarity, but the inhibit line, if enabled, would cancel that. Then on reading, the memory would try to force the bit to the opposite polarity, and the sense line would detect if the bit changed from one polarity to the other. Here's a brief description. http://www.science.uva.nl/faculteit/museum/CoreMemory.html
On Monday 18 July 2005 21:17, Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
You are correct, but everyone knows that when someone mentions RAM and swap in the same paragraph, they are talking about chips when they say RAM and hard drive when they say swap.

Bryan
On Tuesday 19 July 2005 14:51, Bryan Tyson wrote:
On Monday 18 July 2005 21:17, Anders Johansson wrote:
Technically, a hard disk is non-volatile RAM, while internal memory is volatile RAM (meaning it gets cleared when you turn the power off)
You are correct but everyone knows when someone mentions ram and swap in the same paragraph they are talking about chips when they say ram and hard drive when they say swap.
True. I was just striking a blow for good terminology. Too many people have forgotten what RAM actually means
Anders Johansson wrote:
This has nothing to do with age. RAM stands for (and has always stood for) Random Access Memory. It refers to how you access it. The opposite is Sequential memory. RAM simply means that you can access any part without first having to read through all the bits before it first.
Of course, a harddrive really works by sequential access. It may _look_ as if it's RAM, but if you want the bit that's diametrically opposite the r/w head, the r/w head will read through the sectors until it gets to what you want. In earlier days, many optimisation schemes depended on this behaviour - IBM's DFSORT for instance knew the rotational speeds of the various disk models, and was able to optimise processing according to that.

/Per Jessen, Zürich
How do I list all the soft links pointing to a specific file or dir?

/Bo
On 7/21/05, Bo Jacobsen <subs@systemhouse.dk> wrote:
How do I list all the soft links pointing to a specific file or dir
/Bo
Hi Bo,

e.g. the command "find" with the -samefile option could do this for you. It looks for any other file that has the same inode (if soft links should be included, use the -L option). But keep in mind this could take some time, as it has to check the inode of each file.

Example (to shorten the search time I restricted the search to the /bin directory): you probably know that the gzip command has some links to it which, if the link is called instead, cause a specific behaviour for gzip (gunzip, zcat).

$> find -L /bin -samefile /bin/gzip
/bin/gzip
/bin/zcat
/bin/gunzip

hope this helps
Markus
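Another approach - an untested sketch, with /bin/gzip standing in for whatever file you care about - is to match the symlinks' targets directly rather than their inodes:

  # List symlinks whose link text is exactly /bin/gzip
  find / -lname '/bin/gzip' 2>/dev/null

  # Links created with relative targets won't match a literal pattern,
  # so resolving every symlink is more thorough (GNU readlink -f):
  find / -type l 2>/dev/null | while read -r link; do
      [ "$(readlink -f "$link")" = "/bin/gzip" ] && echo "$link"
  done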
On Monday, July 18, 2005 @ 9:20 AM, Per Jessen wrote:
Nick Jones wrote:
So when I told it to go ahead with this install, it errored out again because the swap partitions interfered with each other. So my question is this: SuSE wants a 900mb swap partition, but will it run OK with the 509mb swap partition?
The direct answer is yes. But 900Mb swap for a machine with 128Mb is overkill. There is a rule of thumb, but I can't remember what it is :-( - in your case, I would probably opt for a 256M swap partition. You can set up your partitions using fdisk (with Knoppix or the SuSE rescue system), then start the install again.
/Per Jessen, Zürich
The rule of thumb is 1 1/2 times your RAM. For example, for 1 G of RAM, you should have a 1.5 G swap space.

Greg W
The Monday 2005-07-18 at 19:20 +0200, Per Jessen wrote:
So when I told it to go ahead with this install, it errored out again because the swap partitions interfered with each other. So my question is this: SuSE wants a 900mb swap partition, but will it run OK with the 509mb swap partition?
The direct answer is yes. But 900Mb swap for a machine with 128Mb is overkill. There is a rule of thumb, but I can't remember what it is :-( - in your case, I would probably opt for a 256M swap partition.
That "rule of thumb" was invented for windows, it makes no sense in Linux. There is no "rule", simply use as much or as little as you need, regardless on how much RAM there is available. It depends on the programs that will be run and their memory requirement. I have systems with about 20 times more swap than ram... For example, if you want to use the suspend to disk feature of 9.3, a large swap is required, and 1 gig would be nice. So, 900Mb is not an overkill, not really. Maybe not needed, but maybe it is - depends on what apps the OP uses... I personally think 500 Mb could be scarce. And no, I do not accept naming HD memory as RAM memory, for one thing: it is not directly addressable by the processor, thus it is not even "memory". Reading from HD requires a program, as it resides in a peripheral device. I call it "long term external storage space" ;-) - -- Cheers, Carlos Robinson -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.0 (GNU/Linux) Comment: Made with pgp4pine 1.76 iD8DBQFC4AbEtTMYHG2NR9URAkD9AKCWgQhQxG4HygkMzxZQ3kVuTo2rzQCfbY/k hQcQjSlEeiVOKyJvlPrhk3E= =wS0+ -----END PGP SIGNATURE-----
Carlos E. R. wrote:
And no, I do not accept naming HD memory as RAM memory, for one thing: it is not directly addressable by the processor, thus it is not even "memory". Reading from HD requires a program, as it resides in a peripheral device. I call it "long term external storage space" ;-)
That's the case for modern computers, but back in the dark ages, some computers used a drum or disk for memory.
The Thursday 2005-07-21 at 21:12 -0400, James Knott wrote:
Carlos E. R. wrote:
And no, I do not accept naming HD memory as RAM memory, for one thing: it is not directly addressable by the processor, thus it is not even "memory". Reading from HD requires a program, as it resides in a peripheral device. I call it "long term external storage space" ;-)
That's the case for modern computers, but back in the dark ages, some computers used a drum or disk for memory.
Which was directly addressable for that architecture ;-)

Just think, how could you run a program to read a byte from the drum, if that program has to run in drum memory! Impossible - it has to be read directly, or otherwise it needs another type of memory.

--
Cheers,
Carlos Robinson
Carlos E. R. wrote:
Just think, how could you run a program to read a byte from the drum, if that program has to run in drum memory! Impossible, it has to be read directly, or otherwise, it needs other type of memory.
Which is why there is a distinction between primary and secondary storage. You're playing incredibly loose with definitions here
Carlos E. R. wrote:
The Thursday 2005-07-21 at 21:12 -0400, James Knott wrote:
Carlos E. R. wrote:
And no, I do not accept naming HD memory as RAM memory, for one thing: it is not directly addressable by the processor, thus it is not even "memory". Reading from HD requires a program, as it resides in a peripheral device. I call it "long term external storage space" ;-)
That's the case for modern computers, but back in the dark ages, some computers used a drum or disk for memory.
Which was directly addressable for that architecture ;-)
Just think, how could you run a program to read a byte from the drum, if that program has to run in drum memory! Impossible, it has to be read directly, or otherwise, it needs other type of memory.
That's how it worked. The computer directly addressed the locations on the drum. Many years ago, I used to maintain an ancient (it was older than I was) "computer" (it was called a computer, even though it wasn't programmable and could only perform the one function), that was built with vacuum tubes and relays. It used a drum for memory, and to access a location, it would specify the desired track, and counters were used to locate the position on that track. There were no sectors or even files, just data stored at locations on the drum surface. One thing you have to bear in mind was that the earliest computers were little more than programmable calculators.
The Thursday 2005-07-21 at 22:21 -0400, James Knott wrote:
That's the case for modern computers, but back in the dark ages, some computers used a drum or disk for memory.
Which was directly addressable for that architecture ;-)
Just think, how could you run a program to read a byte from the drum, if that program has to run in drum memory! Impossible, it has to be read directly, or otherwise, it needs other type of memory.
That's how it work. The computer directly addressed the locations on the drum. Many years ago, I used to maintain an ancient (it was older than I was) "computer" (it was called a computer, even though it wasn't programable and could only perform the one function), that was built with vacuum tubes and relays. It used a drum for memory and to access a location, it would specify the desired track and counters were used to locate the position on that track. There were no sectors or even files, just data stored at locations on the drum surface. One thing you have to bear in mind, was that the earliest computers, were little more than programable calculators.
I thought so. Once I built a one-bit adder with carry (in and out) with relays on a breadboard, just for fun. The click-clack was fascinating. But my fellow students didn't see the "fun" in it... disappointing ;-)

(My first "computer" was a programmable TI-57, an Apple being completely out of my reach)

--
Cheers,
Carlos Robinson
On Thursday 21 July 2005 22:34, Carlos E. R. wrote:
And no, I do not accept naming HD memory as RAM memory,
By the way, if you're going to reply to something I wrote, reply to me when I write it.
for one thing: it is not directly addressable by the processor,
The definition of 'random access' does not include 'direct addressability without the use of a driver'
thus it is not even "memory".
That's just silly
On Thursday, July 21, 2005 @ 6:15 PM, Anders Johansson wrote:
On Thursday 21 July 2005 22:34, Carlos E. R. wrote:
And no, I do not accept naming HD memory as RAM memory,
By the way, if you're going to reply to something I wrote, reply to me when I write it.
for one thing: it is not directly addressable by the processor,
The definition of 'random access' does not include 'direct addressability without the use of a driver'
thus it is not even "memory".
That's just silly
http://en.wikipedia.org/wiki/Random_access_memory Greg Wallace
The Friday 2005-07-22 at 04:15 +0200, Anders Johansson wrote:
And no, I do not accept naming HD memory as RAM memory,
By the way, if you're going to reply to something I wrote, reply to me when I write it.
Sorry, it wasn't meant as an answer to you in particular, that's why that paragraph was indented; it was a "side thought" after reading the whole thread. I'll reformat the email to recover what I said exactly.
And no, I do not accept naming HD memory as RAM memory, for one thing: it is not directly addressable by the processor, thus it is not even "memory". Reading from HD requires a program, as it resides in a peripheral device. I call it "long term external storage space" ;-)
Notice the smiley, please. You quoted thus:
for one thing: it is not directly addressable by the processor,
The definition of 'random access' does not include 'direct addressability without the use of a driver'
thus it is not even "memory".
That's just silly
No, it is not.

By "memory" we commonly understand "primary memory". When one goes to the computer shop and asks for more memory, nobody thinks you want to buy a bigger HD.

The common usage of the unqualified term "memory" refers to main memory or primary storage, which has to be directly addressable; it's one of its properties. Other uses of the word that are not so common need to be qualified or differentiated.

Therefore, when we talk about "RAM" we are thinking of the memory chips used in computers - even if HD can be considered memory of some kind, it is not what we refer to as RAM. It's the same convention as when we talk of ROM, forgetting that ROM chips are also RAM. A misnomer, but generally accepted, and we all know what is being referred to.

But this argument is completely byzantine and pointless.

--
Cheers,
Carlos Robinson
On Saturday, July 23, 2005 @ 3:35 PM, Carlos Robinson wrote:
The Friday 2005-07-22 at 04:15 +0200, Anders Johansson wrote:
And no, I do not accept naming HD memory as RAM memory,
By the way, if you're going to reply to something I wrote, reply to me when I write it.
Sorry, it wasn't meant as an answer to you in particular, that's why that paragraph was indented; it was a "side thought" after reading the whole thread.
I'll reformat the email to recover what I said exactly.
And no, I do not accept naming HD memory as RAM memory, for one thing: it is not directly addressable by the processor, thus it is not even "memory". Reading from HD requires a program, as it resides in a peripheral device. I call it "long term external storage space" ;-)
Notice the smiley, please.
You quoted thus:
for one thing: it is not directly addressable by the processor,
The definition of 'random access' does not include 'direct addressability without the use of a driver'
thus it is not even "memory".
That's just silly
No, it is not.
By "memory" we commonly understand "primary memory". When one goes to the computer shop and ask for more memory, nobody thinks you want to buy a bigger HD.
The common usage of the unqualified term "memory" refers to main memory or primary storage, which has to be directly addressable, its one of its properties. Other uses of the word that are not so common need to be qualified or differentiated.
Therefore, when we talk about "RAM" we are thinking of the memory chips used in computers - even if HD can be considered memory of some kind, it is not what we refer to as RAM. It's the same convention as when we talk of ROM, forgetting that ROM chips are also RAM. A misname, but generally accepted, and we all know what its being referred to.
But this argument is completely byzantine and pointless.
- -- Cheers, Carlos Robinson
Carlos:

Lots of good information in your replies. I have saved most of them off. Now at this link --

http://en.wikipedia.org/wiki/DVD-RAM

they describe DVD-RAM. To me, this is not really RAM. On the mainframe, we called disk access "direct access" because you didn't have to read through the disk like you did a tape in order to get to the header and then read the data. The catalog (a VSAM file and, as I recall, an ISAM file before that) pointed to the disk volume, and the Volume Table Of Contents (VTOC) on that volume then pointed to the beginning of the file. It was direct access but not RAM (you couldn't, as you mentioned earlier, say give me byte nnnn off of the disk). I would say that DVD-RAM should really be called DVD-DIRECT. From my reading of it, it works very much like a hard drive, which, as has been discussed, is not truly RAM. The discussion in the Wiki says that the tracks are laid concentrically as opposed to being one long track, as is apparently the case on regular DVDs. To me, that sounds like a hard drive layout. Am I missing something here, or is DVD-RAM just a poor choice of terms by these folks?

Greg Wallace
On Sunday 24 July 2005 01:19, Greg Wallace wrote:
On Saturday, July 23, 2005 @ 3:35 PM, Carlos Robinson wrote:
The Friday 2005-07-22 at 04:15 +0200, Anders Johansson wrote:
And no, I do not accept naming HD memory as RAM memory,
By the way, if you're going to reply to something I wrote, reply to me when I write it.
Sorry, it wasn't meant as an answer to you in particular, that's why that paragraph was indented; it was a "side thought" after reading the whole thread.
I'll reformat the email to recover what I said exactly.
And no, I do not accept naming HD memory as RAM memory, for one thing: it is not directly addressable by the processor, thus it is not even "memory". Reading from HD requires a program, as it resides in a peripheral device. I call it "long term external storage space" ;-)
Notice the smiley, please.
You quoted thus:
for one thing: it is not directly addressable by the processor,
The definition of 'random access' does not include 'direct addressability without the use of a driver'
thus it is not even "memory".
That's just silly
No, it is not.
By "memory" we commonly understand "primary memory". When one goes to the computer shop and ask for more memory, nobody thinks you want to buy a bigger HD.
The common usage of the unqualified term "memory" refers to main memory or primary storage, which has to be directly addressable, its one of its properties. Other uses of the word that are not so common need to be qualified or differentiated.
Therefore, when we talk about "RAM" we are thinking of the memory chips used in computers - even if HD can be considered memory of some kind, it is not what we refer to as RAM. It's the same convention as when we talk of ROM, forgetting that ROM chips are also RAM. A misname, but generally accepted, and we all know what its being referred to.
But this argument is completely byzantine and pointless.
- -- Cheers, Carlos Robinson
Carlos:
Lots of good information in your replies. I have saved most of them off. Now at this link --
http://en.wikipedia.org/wiki/DVD-RAM
they describe DVD-RAM. To me, this is not really RAM. On the mainframe, we called disk access "direct access" because you didn't have to read through the disk like you did a tape in order to get to the header and then read the data. The catalog (a VSAM file and, as I recall, and ISAM file before that) pointed to the disk volume and the Volume Table Of Contents (VTOC) on that volume then pointed to the beginning of the file. It was direct access but not RAM (you couldn't, as you mentioned earlier, say give me byte nnnn off of the disk). I would say that DVD-RAM should really be called DVD-DIRECT. From my reading of it, it works very much like a hard drive, which, as has been discussed, is not truly RAM. The discussion in the Wiki says that the tracks are laid concentrically as opposed to being one long track, as is apparently the case on regular DVDs. To me, that sounds like a hard drive layout. Am I missing something here, or is DVD-RAM just a poor choice of terms by these folks?
Greg Wallace

I also use these drives. I think the RAM stands for Random Access Media, since you can access this media like a hard drive.
--
John R. Sowden
AMERICAN SENTRY SYSTEMS, INC.
Residential & Commercial Alarm Service
UL Listed Central Station
Serving the San Francisco Bay Area Since 1967
mail@americansentry.net    www.americansentry.net
The Sunday 2005-07-24 at 00:19 -0800, Greg Wallace wrote:
Am I missing something here, or is DVD-RAM just a poor choice of terms by these folks?
Many of those terms are not consistent with one another, like the classic RAM and ROM. It could be "Random Access Media" as John says - that would be appropriate, I think.

--
Cheers,
Carlos Robinson
On Sunday, July 24, 2005 @ 4:40 AM, Carlos Robinson wrote:
The Sunday 2005-07-24 at 00:19 -0800, Greg Wallace wrote:
Am I missing something here, or is DVD-RAM just a poor choice of terms by these folks?
Many of those terms are not consistent with one another, like the classic RAM and ROM. It could be "Random Access Media" as John says - that would be appropriate, I think.
- -- Cheers, Carlos Robinson
Yeah, maybe that's it, but I think they're bastardizing the term. When a term has been around as long as RAM has, you'd think they'd come up with something else. The same would go for ROM or any number of other terms that have been out there for many years. But I guess using the term RAM makes it "hotter" sounding. Sounds like a pretty handy type of storage, though I believe you can install software that makes regular DVDs direct access. Then you can use direct on some disks and sequential on other ones. I know such software exists for CDs (from Roxio, I have it), and I just assume it also exists for DVDs.

Greg Wallace
On Sunday, July 24, 2005 @ 7:59 PM, I wrote:
On Sunday, July 24, 2005 @ 4:40 AM, Carlos Robinson wrote:
The Sunday 2005-07-24 at 00:19 -0800, Greg Wallace wrote:
Am I missing something here, or is DVD-RAM just a poor choice of terms by these folks?
Many of those terms are not consistent with one another, like the classic RAM and ROM. It could be "Random Access Media" as John says - that would be appropriate, I think.
- -- Cheers, Carlos Robinson
Yeah, maybe that's it, but I think they're bastardizing the term. When a term has been around as long as RAM has, you'd think they'd come up with something else. The same would go for ROM or any number of other terminology that's been out there for many years. But I guess using the term RAM makes it "hotter" sounding. Sounds like a pretty handy type of storage, though I believe you can install software that makes regular DVD's direct access. Then you can use direct on some disks and sequential on other ones. I know such software exists for CDs (from Roxio, I have it), and I just assume it also exists for DVDs.
Greg Wallace
On second thought, maybe having software that would do direct updates to a DVD using a regular burner would be difficult or impossible. Greg Wallace
On Monday, July 25, 2005 @ 2:35 AM, I wrote:

On Sunday, July 24, 2005 @ 7:59 PM, I wrote:
On Sunday, July 24, 2005 @ 4:40 AM, Carlos Robinson wrote:
The Sunday 2005-07-24 at 00:19 -0800, Greg Wallace wrote:
Am I missing something here, or is DVD-RAM just a poor choice of terms by these folks?
Many of those terms are not consistent with one another, like the classic RAM and ROM. It could be "Random Access Media" as John says - that would be appropriate, I think.
- -- Cheers, Carlos Robinson
Yeah, maybe that's it, but I think they're bastardizing the term. When a term has been around as long as RAM has, you'd think they'd come up with something else. The same would go for ROM or any number of other terminology that's been out there for many years. But I guess using the term RAM makes it "hotter" sounding. Sounds like a pretty handy type of storage, though I believe you can install software that makes regular DVD's direct access. Then you can use direct on some disks and sequential on other ones. I know such software exists for CDs (from Roxio, I have it), and I just assume it also exists for DVDs.
Greg Wallace
On second thought, maybe having software that would do direct updates to a DVD using a regular burner would be difficult or impossible.
Greg Wallace
And then there's the extended lifetime of the DVD-RAM. Ok, NEVER MIND! Greg Wallace P. S.: I obviously haven't spent much time working with DVDs.
Greg Wallace wrote:
Yeah, maybe that's it, but I think they're bastardizing the term. When a term has been around as long as RAM has, you'd think they'd come up with something else. The same would go for ROM or any number of other terminology that's been out there for many years. But I guess using the term RAM makes it "hotter" sounding.
The problem is there's too many TLA's. (Three Letter Acronyms) ;-)
The Monday 2005-07-25 at 06:54 -0400, James Knott wrote:
Greg Wallace wrote:
Yeah, maybe that's it, but I think they're bastardizing the term. When a term has been around as long as RAM has, you'd think they'd come up with something else. The same would go for ROM or any number of other terminology that's been out there for many years. But I guess using the term RAM makes it "hotter" sounding.
The problem is there's too many TLA's.
(Three Letter Acronyms) ;-)
¡Yes!

As the normal usage of the terms ROM and RAM is wrong (ROM chips are (usually) RAM as well), meaning that the real meaning of the acronym is not what we really mean when we use them, if you follow my meaning :-p, I don't care much about reusing the term for DVDs if they like.

Let's live! ;-)

--
Cheers,
Carlos Robinson
Carlos E. R. wrote:
That "rule of thumb" was invented for windows, it makes no sense in Linux. There is no "rule", simply use as much or as little as you need, regardless on how much RAM there is available. It depends on the programs that will be run and their memory requirement.
Ah. I was wondering about that, since I've read both truisms. For my own part, I've never been able to understand the rationale for the 2X truism, so I've done my own thing, which is 768MB RAM, 2GB swap partition. My rationale is that I like to have several large programs available without having to wait for them to load and initialize every time I want to change from one to the other. 2GB swap lets me have them all at once, with occasional page faults rather than out-of-memory errors. Of course, switching to a recently unused program generates lots of page faults, but the initialization is already there in the swap. Startup time is almost unnoticeable. As I recall, that would have been a mistake until several years ago when the Linux memory manager was fixed so that large swap spaces no longer slowed the entire system down drastically.
And no, I do not accept naming HD memory as RAM memory, for one thing: it is not directly addressable by the processor, thus it is not even "memory". Reading from HD requires a program, as it resides in a peripheral device. I call it "long term external storage space" ;-)
I agree. Furthermore, it's not even "random" in the sense that internal memory is: you can't get to a location quickly. You have to move the head, then wait for the sector to come under the head and be recognized, then read in at least a sector, then copy the data to a convenient portion of internal memory, then ... John Perry
On Sunday 24 July 2005 10:40 pm, John Perry wrote:
Ah. I was wondering about that, since I've read both truisms. For my own part, I've never been able to understand the rationale for the 2X truism, so I've done my own thing, which is 768MB RAM, 2GB swap partition. My rationale is that I like to have several large programs available without having to wait for them to load and initialize every time I want to change from one to the other. 2GB swap lets me have them all at once, with occasional page faults rather than out-of-memory errors. Of course, switching to a recently unused program generates lots of page faults, but the initialization is already there in the swap. Startup time is almost unnoticeable.

The amount of swap space depends on how the system is going to be used as well as the amount of physical memory you have. The 2x was kind of a general rule of thumb that kind of went out the window with large memory systems. In your case, you have a need to allocate a large swap space since you want to leave a number of large programs resident. Most people today probably don't need more than 512M. (I go back to the days when 1MB of RAM was considered a lot.) But, when it is not unusual for a desktop system to have 1GB RAM with large (100GB+) hard drives, then allocating 2 or 3GB of swap is not excessive even though it is probably mostly wasted space.
--
Jerry Feldman <gaf@blu.org>
Jerry Feldman wrote:
... (I go back to the days when 1MB or RAM was considered a lot).
Heh -- I got my first home computer when 64K was an extravagantly expensive add-on, and OS-9 personal edition ran marvelously in my 16K CoCo ($399!).
But, when it is not unusual for a desktop system to have 1GB RAM with large (100GB+) hard drives, then allocating 2 or 3GB of swap is not excessive eventhough it is probably mostly wasted space.
I suppose it's needed for graphics-intensive programs (I use a couple of CAD systems myself), but I suspect that most people would not notice the swapping delays if they had just enough RAM and lots of swap. But then, RAM is so cheap now that I guess Denning's admonition to provide enough memory to satisfy the locality requirements and leave the rest to swap is less important than it was in the '70s :-).

John Perry
On Monday 25 July 2005 1:24 pm, John Perry wrote:
Jerry Feldman wrote:
... (I go back to the days when 1MB or RAM was considered a lot).
Heh -- I got my first home computer when 64K was an extravagantly expensive add-on, and OS-9 personal edition ran marvelously in my 16K CoCo ($399!).

My first mainframe was an IBM 7044 with 32K. My first home computer was an Apple II (not plus) with 4K. I upgraded it to 16K with discarded chips from work. Most of the discarded chips were fine in a slow Apple II, but may have been removed because they failed a diagnostic.
I also worked on a POS system in the '70s that had 4K (12 bit words) - DEC PDP8 - used at all Burger King company stores in the early 1970s.
--
Jerry Feldman <gaf@blu.org>
The Monday 2005-07-25 at 10:24 -0400, Jerry Feldman wrote:
systems. In your case, you have a need to allocate a large swap space since you want to leave a number of large programs resident. Most people today probably don't need more that 512M.
You need somewhat more swap than RAM if you intend to suspend to disk at some time. I have found that suspending is useful; it is faster than halt/boot. Exactly how much swap is needed for suspending, I dunno, but somewhat more than real RAM.

--
Cheers,
Carlos Robinson
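A quick way to sanity-check that before trying it (a sketch only; on 9.3 the supported route is powersaved/YaST, so treat the raw kernel interface below as the generic 2.6 swsusp mechanism rather than the distro's tool):

  # Swap total should comfortably exceed the RAM total
  free -m

  # Generic in-kernel suspend-to-disk trigger on 2.6 kernels (as root)
  echo disk > /sys/power/state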
The Sunday 2005-07-24 at 22:40 -0400, John Perry wrote:
Ah. I was wondering about that, since I've read both truisms. For my own part, I've never been able to understand the rationale for the 2X truism, so I've done my own thing, which is 768MB RAM, 2GB swap partition.
I don't have time to answer today, so I'll just write a brief comment: Windows was limited to 2X; that was the absolute maximum. At least, that was so in Win 3, and probably 95. That's the origin of that "rule of thumb". Linux is not so limited; you can add as much as you need (till the kernel complains, of course).

For example, I have a system with 32 MB RAM and around 1 GB swap. I will not deny it is slow, but I can't do anything else.

--
Cheers,
Carlos Robinson
On Monday 25 July 2005 2:49 pm, Carlos E. R. wrote:

The Sunday 2005-07-24 at 22:40 -0400, John Perry wrote:
Ah. I was wondering about that, since I've read both truisms. For my own part, I've never been able to understand the rationale for the 2X truism, so I've done my own thing, which is 768MB RAM, 2GB swap partition.
I don't have time to answer today, so I'll just write a brief comment: Windows was limited to 2X, that was the absolute maximum. At least, that was so in win 3, and probably 95. That's the origin of that "rule of thumb". Linux is not so limited, you can add as much as you need (till the kernel complains, of course).
For example, I have a system with 32Mb RAM and around 1 GB swap. I will not deny it is slow, but can't do anything else. Actually, in Windows95 and forward swap was somewhat automatic. In Windows, SWAP was (and is) a file in the file system. I'm pretty sure the 2x (or 3x) rule of thumb is of Unix origin since we've been allocating swaps on Unix for quite a while before Windows knew what virtual memory was. At one time,
On Monday 25 July 2005 2:49 pm, Carlos E. R. wrote: the Linux swap was limited to something like 768M, and you could not allocate more, but that was fixed long before the 2.2 kernel. -- Jerry Feldman <gaf@blu.org> Boston Linux and Unix user group http://www.blu.org PGP key id:C5061EA9 PGP Key fingerprint:053C 73EC 3AC1 5C44 3E14 9245 FB00 3ED5 C506 1EA9
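Since Linux will happily use more than one swap area, one low-risk way to grow swap on an already-installed system is to add a swap file next to the existing partition. A rough sketch follows (Python driving the standard util-linux tools mkswap and swapon; the path and size are made-up placeholders, not values from this thread, and it must run as root):

#!/usr/bin/env python3
# Sketch: add an extra swap file alongside an existing swap partition.
# Uses the standard tools (dd, mkswap, swapon); must run as root.
# SWAPFILE and SIZE_MB are illustrative placeholders.
import os
import subprocess

SWAPFILE = "/extra-swap"
SIZE_MB = 256

def add_swap_file(path, size_mb):
    # Fill the file with dd rather than creating it sparse,
    # since swap needs fully allocated blocks.
    subprocess.run(["dd", "if=/dev/zero", "of=" + path,
                    "bs=1M", "count=" + str(size_mb)], check=True)
    os.chmod(path, 0o600)                          # keep the swap file private
    subprocess.run(["mkswap", path], check=True)   # write the swap signature
    subprocess.run(["swapon", path], check=True)   # activate it right away

if __name__ == "__main__":
    add_swap_file(SWAPFILE, SIZE_MB)
    print("Verify with 'swapon -s'; add an /etc/fstab entry to keep it across reboots.")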
The Monday 2005-07-25 at 15:01 -0400, Jerry Feldman wrote:
Actually, in Windows95 and forward swap was somewhat automatic. In Windows, SWAP was (and is) a file in the file system.
That's the same as it was in Win 3.1. It was a file; it could be fixed or automatic. Same thing :-)
I'm pretty sure the 2x (or 3x) rule of thumb is of Unix origin since we've been allocating swaps on Unix for quite a while before Windows knew what virtual memory was.
Yes, of course, I only said that the 2X rule was a windows thing, not that virtual memory was a windows thing. It was not possible to allocate more: if you did, it warned you that it would not use it.
At one time, the Linux swap was limited to something like 768M, and you could not allocate more, but that was fixed long before the 2.2 kernel.
Right. And not so long ago, there was a limit per swap partition or file; I think it was 128 Mb. But you could allocate dozens of those spaces. -- Cheers, Carlos Robinson
On Thu, 2005-07-28 at 03:47 +0200, Carlos E. R. wrote:
The Monday 2005-07-25 at 15:01 -0400, Jerry Feldman wrote:
Actually, in Windows95 and forward swap was somewhat automatic. In Windows, SWAP was (and is) a file in the file system.
That's the same as it was in Win 3.1. It was a file; it could be fixed or automatic. Same thing :-)
I'm pretty sure the 2x (or 3x) rule of thumb is of Unix origin since we've been allocating swaps on Unix for quite a while before Windows knew what virtual memory was.
Yes, of course, I only said that the 2X rule was a windows thing, not that virtual memory was a windows thing. It was not possible to allocate more: if you did, it warned you that it would not use it.
I was first exposed to unix (SUN OS) in 1987 and (as far as I can remember) I was told that swap was generally 2X the amount of memory. SUN OS was clearly around before windows was. Windows 1.0 was released in 1985.
At one time, the Linux swap was limited to something like 768M, and you could not allocate more, but that was fixed long before the 2.2 kernel.
Right. And not so long ago, there was a limit per swap partition or file; I think it was 128 Mb. But you could allocate dozens of those spaces.
-- Ken Schneider UNIX since 1989, linux since 1994, SuSE since 1998 "The day Microsoft makes something that doesn't suck is probably the day they start making vacuum cleaners." -Ernst Jan Plugge
On Thursday 28 July 2005 8:28 am, Ken Schneider wrote:
I was first exposed to unix (SUN OS) in 1987 and (as far as I can remember) I was told that swap was generally 2X the amount of memory. SUN OS was clearly around before windows was. Windows 1.0 was released in 1985.
It certainly was. Sun was founded in 1982. The X Window System was developed in the mid-1980s. I think that Apple released the Mac in the 1984 time frame, but the Lisa predated the Mac. In any case, early Unix systems did not contain virtual memory, but used a technique called swapping, hence the term. Virtual memory for Unix systems started to appear in the early 1980s. Before virtual memory, entire processes were swapped out.
Today, Linux (and other Unix systems) use modern demand-paged virtual memory, where memory is broken down into a series of pages. The page size is a function of both the OS and the hardware. Today, when you run a program (in very general terms):
- The text section (e.g. the instructions) is mapped read-only into your virtual memory from where the program resides. Those pages are not necessarily paged in at the start. The text pages may be shared by multiple instances of that program.
- Your initialized data is also mapped in, but those pages may be written to. If need be, those pages can be swapped out to the swap area.
- Your .bss section (uninitialized data) is also mapped, but the pages are not created until they are used.
- Most Linux programs also use shared libraries. These libraries are also mapped into the process's virtual memory and paged in as required. These pages are also shared by the processes that use them.
The bottom line is that virtual memory makes very efficient use of physical memory. When the system starts to run short of physical memory, the first pages to be reused are the read-only ones, because they do not have to be copied to the swap area. -- Jerry Feldman <gaf@blu.org> Boston Linux and Unix user group http://www.blu.org
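If you want to see those mappings for yourself, /proc/<pid>/maps shows them per process. A small illustration (Python, Linux-only, my own sketch rather than anything from the thread): the read-only r-x segments are the shared text and libraries described above, while the writable rw segments are the private data that can end up in swap.

#!/usr/bin/env python3
# Print the first few virtual memory mappings of this very process.
# Linux-specific: relies on the /proc/self/maps text format
# (address range, permissions, offset, device, inode, optional path).

def show_mappings(limit=20):
    with open("/proc/self/maps") as f:
        for i, line in enumerate(f):
            if i >= limit:
                break
            fields = line.split()
            addr, perms = fields[0], fields[1]
            path = fields[5] if len(fields) > 5 else "[anonymous]"
            if perms.startswith("r-x"):
                kind = "text/library (read-only, shareable)"
            elif "w" in perms:
                kind = "data (private, writable)"
            else:
                kind = "other"
            print("%-32s %s  %-36s %s" % (addr, perms, kind, path))

if __name__ == "__main__":
    show_mappings()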
On Thursday, July 28, 2005 @ 4:28 AM, Ken Schneider wrote:
On Thu, 2005-07-28 at 03:47 +0200, Carlos E. R. wrote:
The Monday 2005-07-25 at 15:01 -0400, Jerry Feldman wrote:
Actually, in Windows95 and forward swap was somewhat automatic. In Windows, SWAP was (and is) a file in the file system.
That's the same as it was in Win 3.1. It was a file; it could be fixed or automatic. Same thing :-)
I'm pretty sure the 2x (or 3x) rule of thumb is of Unix origin since we've been allocating swaps on Unix for quite a while before Windows knew what virtual memory was.
Yes, of course, I only said that the 2X rule was a windows thing, not that virtual memory was a windows thing. It was not possible to allocate more: if you did, it warned you that it would not use it.
I was first exposed to unix (SUN OS) in 1987 and (as far as I can remember) I was told that swap was generally 2X the amount of memory. SUN OS was clearly around before windows was. Windows 1.0 was released in 1985.
At one time, the Linux swap was limited to something like 768M, and you could not allocate more, but that was fixed long before the 2.2 kernel.
Right. And not so long ago, there was a limit per swap partition or file; I think it was 128 Mb. But you could allocate dozens of those spaces.
Ken,
Windows XP will not let you allocate a swap file (pagefile.sys) that is greater than 1.5 times your RAM. Not sure if that is pertinent to Linux, but I just wanted to point it out. I personally use the 1.5 number, and if you let Linux compute the size of your swap space at installation, it wants to allocate roughly that amount, though it seems to be just a little bit less than 1.5 times RAM.
Greg Wallace
On Thursday 28 July 2005 9:19 am, Greg Wallace wrote:
Windows XP will not let you allocate a swap file (pagefile.sys) that is greater than 1.5 times your RAM. Not sure if that is pertinent to Linux, but I just wanted to point it out. I personally use the 1.5 number, and if you let Linux compute the size of your swap space at installation, it wants to allocate roughly that amount, though it seems to be just a little bit less than 1.5 times RAM.
The 2x rule of thumb has been around a long time, from the days when we measured memory in K and not M or G. It comes down to how you intend to use your system. For the most part, a swap of 512M should be sufficient for most home systems, but it does not hurt to allocate more. Both RAM and disk are relatively inexpensive today. A large amount of swap space is really only necessary if you have an application that uses a large amount of data. -- Jerry Feldman <gaf@blu.org> Boston Linux and Unix user group http://www.blu.org
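Put as arithmetic, the rules of thumb traded in this thread look roughly like the toy calculation below (Python; the 1.5x, 2x, and 512M numbers are simply the ones people quoted here, not an official SuSE formula):

#!/usr/bin/env python3
# Toy calculator for the swap-size rules of thumb mentioned in this thread.
# The multipliers (1.5x for suspend-to-disk, 2x for small-memory machines,
# a 512 MB floor otherwise) come from the discussion, not from SuSE docs.

def suggest_swap_mb(ram_mb, suspend_to_disk=False):
    if suspend_to_disk:
        return int(ram_mb * 1.5)         # room for a full RAM image plus headroom
    if ram_mb < 512:
        return max(512, 2 * ram_mb)      # old 2x rule, but never below 512 MB here
    return 512                           # plenty for a typical home desktop

for ram in (128, 512, 768):
    print("%4d MB RAM -> %4d MB swap (%d MB if suspending to disk)"
          % (ram, suggest_swap_mb(ram), suggest_swap_mb(ram, True)))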
The Thursday 2005-07-28 at 09:44 -0400, Jerry Feldman wrote:
For the most part, a swap of 512M should be sufficient for most home systems, but it does not hurt to allocate more.
You need a swap of around 1.5 times the existing RAM for suspending to disk. That may be the rationale behind YaST's suggestion during installation. -- Cheers, Carlos Robinson
participants (16)
- Anders Johansson
- Bo Jacobsen
- Bruce Marshall
- Bryan Tyson
- Carlos E. R.
- Greg Wallace
- James Knott
- Jerry Feldman
- John Andersen
- John Perry
- John R. Sowden
- Ken Schneider
- Markus Natter
- Nick Jones
- Per Jessen
- Randall R Schulz