How to use a load distribution vector for parallel processing with CPUs of different speeds


When using the parallel solver, the default partitioning method will try to put the same number of cells in each partition. For example, when a case file is loaded into a parallel session using 3 CPUs, a partition statistics report (Parallel > Partition > Print Active Partitions) might look something like this:

>> 3 Active Partitions:
P   Cells    I-Cells   Cell Ratio   Faces    I-Faces   Face Ratio   Neighbors
0   22438      569       0.025      69643      626       0.009          1
1   22438      659       0.029      71227     1117       0.016          1
2   22438     1227       0.055      69949     1743       0.025          2

This is great if all the CPUs being used for the parallel run operate at the same clock speed. However, this is not always the case. Imagine a network with CPUs operating at 2.4 GHz, 3 GHz, and 3 GHz. If the case in the example above is loaded onto such a network, the parallel run will be limited by the slowest CPU, so in effect the 3 GHz CPUs would not be used to their fullest advantage. For a network like this, FLUENT offers a means of creating partitions with unequal numbers of cells, in order to make better use of each CPU.
In the example above, the parallel network (2 CPUs at 3 GHz and 1 CPU at 2.4 GHz) can be used most efficiently if partitions are created in such a way that one partition contains 28.6 percent of the cells (2.4/8.4) and the other two partitions each contain 35.7 percent of the cells (3/8.4). This can be accomplished by using a load distribution vector to control the percentage of the cells assigned to each partition. The load distribution vector is assigned through a text command:

parallel/partition/set/load-distribution (3 3 2.4)
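
For reference, the percentages quoted above follow directly from the relative clock speeds: each partition should receive a share of the cells proportional to the speed of the CPU it will run on. A minimal sketch (Python, purely for illustration, using the speeds from this example):

speeds = [3.0, 3.0, 2.4]   # GHz, listed in partition order (partition-0 first)
total = sum(speeds)        # 8.4

for i, s in enumerate(speeds):
    # Fraction of cells assigned to this partition
    print(f"partition {i}: {100.0 * s / total:.1f}% of the cells")

# partition 0: 35.7% of the cells
# partition 1: 35.7% of the cells
# partition 2: 28.6% of the cells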

More information about this command is available in Section 30.4.6 of the Fluent 6.1 User's Guide:

http://www.fluentusers.com/fluent61/doc/ori/html/ug/node1055.htm

In this case, the procedure to follow is:
1. Start a serial version of FLUENT

2. Set the load distribution vector using the command described above. The number of values between the parentheses should equal the total number of partitions to be created. For a 2-process parallel job where one CPU has a clock speed of 1 GHz and the other a clock speed of 2 GHz, the command would be parallel/partition/set/load-distribution (2 1). In this example, the command is as previously stated:

parallel/partition/set/load-distribution (3 3 2.4)
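
Because the vector must contain exactly one value per partition, it can be convenient to generate the command text from a list of clock speeds. The helper below is a hypothetical sketch in Python, used purely for illustration; the command it prints is the one shown above:

def load_distribution_command(speeds):
    # One value per partition, in partition order (partition-0 first)
    values = " ".join(str(s) for s in speeds)
    return "parallel/partition/set/load-distribution (%s)" % values

print(load_distribution_command([3, 3, 2.4]))  # parallel/partition/set/load-distribution (3 3 2.4)
print(load_distribution_command([2, 1]))       # parallel/partition/set/load-distribution (2 1)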

3. Open the Partition Grid panel (Parallel > Partition), select the number of partitions and the desired bisection method, adjust any other desired settings, and click Partition. In this example, the desired number of partitions is 3.

4. Check the partition statistics by clicking Print Partitions in the Partition Grid panel. In this example, the statistics might look something like this:

>> 3 Partitions:
P   Cells    I-Cells   Cell Ratio   Faces    I-Faces   Face Ratio   Neighbors
0   24039      569       0.024      74575      626       0.008          1
1   24041      628       0.026      76123     1049       0.014          1
2   19234     1196       0.062      60053     1675       0.028          2
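
As a quick sanity check, the achieved split can be compared with the requested one by dividing each partition's cell count by the total. A short sketch, again purely illustrative, using the numbers from the report above:

cells = [24039, 24041, 19234]   # per-partition cell counts from the report
speeds = [3.0, 3.0, 2.4]        # requested load distribution vector

total_cells = sum(cells)
total_speed = sum(speeds)

for i, (c, s) in enumerate(zip(cells, speeds)):
    print(f"partition {i}: actual {c / total_cells:.1%}, requested {s / total_speed:.1%}")

# partition 0: actual 35.7%, requested 35.7%
# partition 1: actual 35.7%, requested 35.7%
# partition 2: actual 28.6%, requested 28.6%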

5. Save the case and data files and exit the serial FLUENT session.

6. Start a parallel FLUENT session with the desired number of CPUs.

http://www.fluentusers.com/fluent61/doc/ori/html/ug/node1041.htm

Note that partition-0 will be mapped onto compute node-0, partition-1 onto compute node-1, and so on. Therefore, in this example it would be necessary for compute nodes 0 and 1 to be spawned on the 3 GHz CPUs and compute node 2 to be spawned on the 2.4 GHz CPU.

7. Read the case and data files saved in Step 5 and begin iterating.

Alternatively, in this situation the load balancing feature (http://www.fluentusers.com/fluent61/doc/ori/html/ug/node1058.htm#loadbalance) could be enabled to have FLUENT automatically attempt to discern any difference in load among the compute nodes.




