
Chaitanya, On Wednesday 14 September 2005 07:49, Chaitanya Krishna A wrote:
Hi,
This could be a bit off the list, but still ...
The output of uname -a on my machine is: Linux achala 2.6.11.4-21.8-smp #1 SMP Tue Jul 19 12:42:37 UTC 2005 i686 i686 i386 GNU/Linux, and I guess SMP stands for Shared Memory Processor. I have two processors on my motherboard.
SMP: "Symmetric Multi-Processor"; "Symmetric" because all processors are co-equal in their capabilities and ability to access shared resources such as memory and I/O ports / devices.
I am doing my work in Molecular Dynamics simulations, so most of the time I would be doing a lot of number crunching. Now if I start a job on my machine, does it automatically run using both processors, or will I have to use a message-passing library like MPI to use them both?
Nothing truly automatically parallelizes. Depending on the language used to implement the application, it can be more or less work to exploit multi-processor hardware. E.g., if the application is written in Java and you're using the latest JVM from Sun, then at a minimum you get parallelization of I/O and garbage collection (w.r.t. the main thread or threads that perform the work of your application).
I experimented with this some time back: I ran the same job with ./executable and also with mpirun -n 2 ./executable on my machine (no clustering or anything). The second one gave marginally better results, and top showed two processes running. Can someone explain what's happening?
Clearly you're referring to some specific MPI system (probably <http://www-unix.mcs.anl.gov/mpi/>?) of which I'm not aware, so I cannot say definitively whether it can exploit your multiprocessor x86 system. Are you certain your application is written to use this MPI software? Keep in mind that the magnitude of any speed-up possible _in principle_ depends on the nature of the algorithms that dominate the application in question. In practice, of course, one rarely sees the full speed-up that is possible, because of the various overheads in the software that provides the parallelism (your MPI system, e.g.).
Regards, Chaitanya.
Randall Schulz