Doubt about SMPs and parallel jobs
Hi,

This could be a bit off the list, but still ...

The output of uname -a on my machine is:

    Linux achala 2.6.11.4-21.8-smp #1 SMP Tue Jul 19 12:42:37 UTC 2005 i686 i686 i386 GNU/Linux

and I guess SMP stands for Shared Memory Processor. I have two processors on my motherboard.

I am doing my work in Molecular Dynamics simulations, so most of the time I will be doing a lot of number crunching. Now, if I start a job on my machine, does it automatically run using both processors, or will I have to use a message passing library like MPI to use both of them?

I experimented with this some time back: I ran the same job with ./executable and also with mpirun -n 2 ./executable on my machine (no clustering or anything). The second one gave marginally better results, and top showed two processes running. Can someone explain what's happening?

Regards,
Chaitanya.
Chaitanya, On Wednesday 14 September 2005 07:49, Chaitanya Krishna A wrote:
Hi,
This could be a bit off the list, but still ...
The output of uname -a on my machine is: Linux achala 2.6.11.4-21.8-smp #1 SMP Tue Jul 19 12:42:37 UTC 2005 i686 i686 i386 GNU/Linux, and I guess SMP stands for Shared Memory Processor. I have two processors on my motherboard.
SMP: "Symmetric Multi-Processor"; "Symmetric" because all processors are co-equal in their capabilities and ability to access shared resources such as memory and I/O ports / devices.
I am doing my work in Molecular Dynamics simulations, so most of the time I will be doing a lot of number crunching. Now, if I start a job on my machine, does it automatically run using both processors, or will I have to use a message passing library like MPI to use both of them?
Nothing truly automatically parallelizes. Depending on the language used to implement the application, it can be more or less work to exploit multi-processor hardware. E.g., if the application is written in Java and you're using the latest JVM from Sun, then at a minimum you get parallelization of I/O and garbage collection (with respect to the main thread or threads that perform the work of your application).
I experimented with this some time back: I ran the same job with ./executable and also with mpirun -n 2 ./executable on my machine (no clustering or anything). The second one gave marginally better results, and top showed two processes running. Can someone explain what's happening?
Clearly you're referring to some specific MPI system (probably <http://www-unix.mcs.anl.gov/mpi/>?) of which I'm not aware, so I cannot say definitively whether it can exploit your multiprocessor x86 system. Are you certain your application is written to use this MPI software? Keep in mind that the speed-up possible _in principle_ depends on the nature of the algorithms that dominate the application in question. In practice, of course, one rarely sees the full speed-up that is theoretically possible, because of the various overheads in the software that provides the parallelism (your MPI system, e.g.). (A small sketch of the distinction follows after this message.)
Regards, Chaitanya.
Randall Schulz
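For anyone curious, here is a rough sketch in C of the distinction Randall is drawing (the file name and the workload are invented, not taken from Chaitanya's MD code). If the executable never calls MPI, then mpirun -n 2 ./executable merely starts two independent copies of the whole job, which is why top shows two processes without much real speed-up; a program only gains when it asks MPI for its rank and divides the work accordingly, for example:

    /* pi_mpi.c -- a sketch only; the series being summed is just an example.
     * Build and run, assuming an MPICH-style installation:
     *     mpicc pi_mpi.c -o pi_mpi
     *     mpirun -n 2 ./pi_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        long i, n = 100000000L;
        double local = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many were launched? */

        /* Each rank takes every size-th term, so with -n 2 each process
         * does half the arithmetic; this is where the second CPU helps. */
        for (i = rank; i < n; i += size)
            local += (i % 2 ? -4.0 : 4.0) / (2.0 * i + 1.0);

        /* Combine the partial sums on rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi is approximately %.10f\n", total);

        MPI_Finalize();
        return 0;
    }

If the executable contains no calls like these, the two copies started by mpirun each redo the entire computation, so little or nothing is gained from the second processor.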
On Wednesday 14 September 2005 17.05, Randall R Schulz wrote:
[snip]
I have found that running calculations, image processing, and other CPU-intensive tasks only benefits if the code is parallelized from the start. The one thing that DOES speed up is the overall responsiveness of the system (the number-crunching thread uses one CPU and the system uses the other). But if the code is written to utilize SMP, then it will be a lot faster.

An example was my dual Celeron 433 MHz (god rest it), which was a lot more responsive under heavy image-processing load than the current single-CPU P3/733 MHz running the same setup/system. (GIMP isn't written to utilize SMP systems.)

--
/Rikard

---------------------------------------------------------------
Rikard Johnels    email : rikard.j@rikjoh.com
                  Web   : http://www.rikjoh.com
                  Mob   : +46 (0)763 19 76 25
                  PGP   : 0x461CEE56
---------------------------------------------------------------
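To make "written to utilize SMP" concrete, here is a minimal sketch in C with POSIX threads (the workload is invented; nothing here comes from GIMP or any real application). One process, two threads sharing memory, each handed half of the index range, so the kernel's SMP scheduler can run them on the two CPUs:

    /* smp_sum.c -- illustrative only.
     * Build: gcc -O2 -pthread smp_sum.c -o smp_sum
     */
    #include <pthread.h>
    #include <stdio.h>

    #define N        100000000L
    #define NTHREADS 2

    struct part { long begin, end; double sum; };

    static void *worker(void *arg)
    {
        struct part *p = arg;
        long i;

        p->sum = 0.0;
        for (i = p->begin; i < p->end; i++)
            p->sum += 1.0 / (i + 1.0);   /* stand-in for real number crunching */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        struct part parts[NTHREADS];
        double total = 0.0;
        int t;

        /* Give each thread its own slice of the index range. */
        for (t = 0; t < NTHREADS; t++) {
            parts[t].begin = t * (N / NTHREADS);
            parts[t].end   = (t + 1) * (N / NTHREADS);
            pthread_create(&tid[t], NULL, worker, &parts[t]);
        }

        /* Wait for both threads and combine their partial sums. */
        for (t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += parts[t].sum;
        }

        printf("total = %f\n", total);
        return 0;
    }

Watched in top, both CPUs go busy; a single-threaded version of the same loop would leave one of them idle, which is essentially the GIMP situation Rikard describes.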
On Wednesday 14 September 2005 18.56, Rikard Johnels wrote:
[snip]
Here are some pointers:

http://www.tldp.org/HOWTO/SMP-HOWTO.html (Linux specific)

http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/icppw/2005/2381/00/2381toc.xml&DOI=10.1109/ICPPW.2005.46 (generic paper)

--
/Rikard

---------------------------------------------------------------
Rikard Johnels    email : rikard.j@rikjoh.com
                  Web   : http://www.rikjoh.com
                  Mob   : +46 (0)763 19 76 25
                  PGP   : 0x461CEE56
---------------------------------------------------------------
On Thursday 15 September 2005 00:49, Chaitanya Krishna A wrote:
Hi,
[snip]
I experimented with this some time back: I ran the same job with ./executable and also with mpirun -n 2 ./executable on my machine (no clustering or anything). The second one gave marginally better results, and top showed two processes running. Can someone explain what's happening?
Regards, Chaitanya.
Hi,

My mate and I are mathematical physicists and do a lot of number crunching. My mate has a couple of multi-processor machines, so I'll ask him this weekend while we celebrate his birthday.

But my understanding is that to get the best out of your machine you need to use a vector-processing language like FORTRAN, and your code has to be organised so that one CPU handles part of the matrix while the second (or more) handles a different part, or even so that one CPU evaluates part of the formula while the other evaluates the remainder. A big part of the skill is knowing how to structure your code so that the compiler finds it easy to 'break' it into shareable parts. That is, none of this fancy hand-coding for efficiency; let the compiler do the optimising.

Most people only get the benefit when they do two things at once, like playing a game while doing an FTP transfer, and the system just allocates the two very separate jobs to the two different CPUs. But my understanding is that you want both CPUs to work on the same job; for that you need to do something like the above.

Talk with you next week if nobody else replies to your question.

Regards,
Colin
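A small, made-up C illustration of Colin's point about structuring code so that it 'breaks' into shareable parts (the arrays and loop bodies are invented, not from any MD or physics code):

    /* loops.c -- illustrative only: why loop structure matters when work
     * is to be split across CPUs. */
    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N], c[N];

    int main(void)
    {
        long i;

        for (i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* Independent iterations: each c[i] depends only on a[i] and b[i],
         * so the range 0..N can simply be cut in half and handed to the two
         * CPUs (by a parallelizing compiler, threads, or MPI ranks). */
        for (i = 0; i < N; i++)
            c[i] = a[i] * b[i];

        /* Loop-carried dependence: c[i] needs c[i-1], so the work cannot
         * just be cut down the middle; the algorithm itself has to be
         * restructured before it can be shared out. */
        c[0] = a[0];
        for (i = 1; i < N; i++)
            c[i] = c[i - 1] + a[i];

        printf("c[N-1] = %f\n", c[N - 1]);
        return 0;
    }

Whether the splitting is then done by a FORTRAN compiler, by threads, or by MPI, the first kind of loop shares easily between the two CPUs and the second does not.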
On Thursday 15 September 2005 00:49, Chaitanya Krishna A wrote:
[snip]
Hi Chaitanya,

I had a chat with my friend re parallel processing. He said that it is generally quite complex to modify your code to make proper use of multiple processors. There are special techniques for structuring the code so that MPI can manage the separation of the work. This can be done in any language, but my friend uses FORTRAN, which has had separation of work across multiple CPUs built into the language for some time now.

So, I am sorry, but I am not much help. Are you using FORTRAN?

Regards,
Colin
Colin Carter wrote:
[snip]
In addition, SMP stands for Symmetric Multi-Processing, which means there is more than one processor available to do the work, sharing memory and other resources. There is an SMP-HOWTO for more of an insight if needed.

Regards
Sid.

--
Sid Boyce ... Hamradio License G3VBV, licensed Private Pilot
Retired IBM/Amdahl Mainframes and Sun/Fujitsu Servers Tech Support Specialist
Microsoft Windows Free Zone - Linux used for all Computing Tasks
participants (5):
- Chaitanya Krishna A
- Colin Carter
- Randall R Schulz
- Rikard Johnels
- Sid Boyce