**** Entered By: akpaul @ 06/09/2005 05:25 PM ****
Q. I am using the "HP MPI Distributed Parallel for x86 64" start method for my distributed parallel run. The system has multiple interconnects. How can I be sure that CFX is using the Myrinet interconnect and not standard Ethernet?
A. There is a command line option which you can manually add to the start methods file, CFX_ROOT/etc/start-methods.ccl, to make HP-MPI report which interconnect is being used.
Make a local copy of this file under ~/.cfx/10.0/start-methods.ccl (on Windows, %home%/.cfx/10.0/start-methods.ccl). If the '10.0' folder does not exist, create it.
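On Linux, the copy step can be scripted. A minimal sketch, assuming the CFX_ROOT environment variable points at the CFX 10.0 installation (the fallback path below is a placeholder; adjust it to your install):

```shell
# Create the per-user override directory and copy the shipped start methods
# file into it. CFX_ROOT is assumed; the default path here is illustrative.
CFX_ROOT=${CFX_ROOT:-/usr/ansys/CFX-10.0}
mkdir -p ~/.cfx/10.0
if [ -f "$CFX_ROOT/etc/start-methods.ccl" ]; then
    cp "$CFX_ROOT/etc/start-methods.ccl" ~/.cfx/10.0/
fi
```

CFX checks the per-user location first, so edits there take effect without touching the installation copy.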
In the local copy, change this start method:
START METHOD: HP MPI Distributed Parallel
Start Command = mpirun -f %{req:appfile}
END
to:
START METHOD: HP MPI Distributed Parallel
Start Command = mpirun -prot -f %{req:appfile}
END
HP MPI will print the communication mode it selects to standard output (the command window). If it is using Myrinet, it will print "GM" in the output for each node in the parallel run. Typical output looks like:
Host 0 -- ip 129.40.203.65 -- ranks 0 - 1
Host 1 -- ip 129.40.203.66 -- ranks 2 - 3
 host |  0    1
======|===========
    0 : SHM  GM
    1 : GM   SHM
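For batch runs you can check the captured solver output for the protocol table rather than watching the command window. A sketch, using a scratch file with sample -prot output as a stand-in for the real output file (the file name is hypothetical):

```shell
# Write a sample of HP-MPI's -prot output to a scratch file (stand-in for
# the real solver output), then test whether the GM (Myrinet) protocol
# appears in the protocol table.
cat > /tmp/solve_prot.out <<'EOF'
Host 0 -- ip 129.40.203.65 -- ranks 0 - 1
 host |  0    1
======|===========
    0 : SHM  GM
    1 : GM   SHM
EOF
if grep -q 'GM' /tmp/solve_prot.out; then
    echo "Myrinet (GM) in use"
fi
```

If the table shows only TCP entries instead of GM, HP MPI fell back to Ethernet.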
If you want to force HP MPI to use Myrinet, change the start command to use the -gm option:
START METHOD: HP MPI Distributed Parallel
Start Command = mpirun -gm -f %{req:appfile}
END
Alternatively, you can copy and paste this as a separate start method of your own:
START METHOD: HP MPI Myrinet Distributed
Start Command = mpirun -gm -f %{req:appfile}
END
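Once the custom start method is saved, it can be selected by name when launching the solver. A hedged sketch of the invocation (case.def and the host names are placeholders; the command is printed rather than executed, since running it needs a CFX installation and a definition file):

```shell
# Print the solver command line for illustration; substitute your own
# definition file and host list before running it for real.
echo cfx5solve -def case.def \
  -start-method \"HP MPI Myrinet Distributed\" \
  -par-dist host1,host2
```

The -start-method argument must match the START METHOD name in start-methods.ccl exactly.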