On 01/19/2017 02:49 AM, Per Jessen wrote:
Carlos E. R. wrote:
On 2017-01-18 22:26, Greg Freemyer wrote:
My mistake:
============
             total    used    free   shared  buffers   cached
Mem:          1.4G    1.3G     93M     2.3M     1.8M     245M
-/+ buffers/cache:    1.1G    340M
Swap:          31G    217M     31G
============
That really makes no sense. Maybe my hardware is too new?
It clearly is not finding all the memory. You need to look at the log of the booting.
You are not using 32bit 13.1, per chance?
With that size memory, it would be running a PAE kernel.
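For anyone who wants to confirm what is going on here, a few quick checks from a shell will show whether the CPU advertises PAE, whether the running kernel is a PAE flavour, and how much RAM the kernel actually found at boot. This is only a sketch assuming a standard Linux /proc layout; the exact output varies by distribution, and guards are added so the commands degrade gracefully elsewhere:

```shell
# Does the CPU advertise the PAE feature flag?
grep -m1 -o 'pae' /proc/cpuinfo 2>/dev/null || echo "no pae flag reported"

# Is the running kernel a PAE flavour? (the flavour shows in the version string)
uname -r

# How much RAM does the kernel actually see?
grep MemTotal /proc/meminfo 2>/dev/null || echo "/proc/meminfo not available"

# Early-boot memory map, to check whether the BIOS reported all of it
# (dmesg may need root; suppress errors and tolerate no matches):
dmesg 2>/dev/null | grep -iE 'e820|Memory:' | head -n 20 || true
```

If MemTotal and the boot-time memory map disagree with what is physically installed, the problem is below the kernel (BIOS/firmware), not in userspace.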
OMG! The PAE mechanism involves a mapping table for the 32-bit system to address more than 4G. I'll leave aside the issue of why anyone might need more than 4G when many of us run excellent systems in that or less. I believe Per Jessen mentioned a server that had been up for over 4 years with just 2G.

I'm sure that more than the 4G I have in my 64-bit desktop would be nice for my photo and video editing, but realistically I know that the real limitation is the framebuffer and rendering speed/capability of my graphics system. I need to throw money/technology at that first. Even though memory is cheaper, to add more memory I'd need a new mobo as well, so in reality a new GPU would be cheaper overall. Santa was kind in the camera department this year, not the computer department. If I were corporate and the LAN/SAN were the limit, I'd see about gigabit networking or optical networking. But then again, all my machines, except for a few from the Closet of Anxieties with 800MHz 32-bit CPUs and 1G RAM, are 64-bit.

Yes, there was always the argument that, for a given RAM speed and datapath (remember the 8088, the 8086 variant with an 8-bit data path chosen for 'backward compatibility' and the need for fewer TTL support components, which was why it was picked for the original IBM-PC? Well, by the time it actually got into production the whole microprocessor/TTL landscape and pricing had changed enough to invalidate the original justification), the smaller opcodes, the reduced fetch time, the fact that most loops were small, requiring only an 8-bit counter, and suchlike blitherations meant that the upgrade wasn't worth it. Heck, I remember in the 1960s hearing a guy argue that stereo hi-fi wasn't worth it: the extra cost of components for the second channel, the extra loudspeaker, the more complicated pick-up on the record player, the more complex recording/mixing equipment. It was all a sales conspiracy to get us to spend more money.
Then along came LPs, cassettes and DVDs, each invalidating the previous technology ...

But "-pae"? Even so, a process's address space remains at 32 bits, meaning it can only access a maximum of 4GB of memory. The OS, however, can access a 64GB address space, allocating 4GB chunks to processes. This is done via a mapping table. The size of the mapping table is going to depend on the actual amount of physical memory, so the more physical memory, the bigger that table is. And that table has to be in the space under the 4G boundary, in kernel space. I'm not sure I like the way this is going.

If this were critical, if this were corporate, I'd seriously look at getting a new mobo and a 64-bit processor. As a manager, I look at it this way: the cost of time (that is, salary or consultant's fee) flutzing around with this, ongoing, compared to the one-time cost of a new mobo and CPU and the one-time cost of installing the same, will almost certainly work out in favour of the latter. At home, on a hobby budget, things might be different. The hobby budget is competing with other household and family expenses.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org