Primitive Error at Node 0: dump_face out of memory

Problem Description:
You have just purchased a dual-processor machine with 4 GB of RAM and have a large case file (between 3 and 4 million cells). While reading the case file in parallel, you receive the following error and FLUENT aborts:

Primitive Error at Node 0: dump_face out of memory
(or Node 1:, 2:, etc.)

Memory and 32-bit Processors

A 32-bit processor can address up to 4 GB of memory, and there is a 2 GB memory limit per process. For example, if you have a single processor with 4 GB of memory running a FLUENT job with, say, 3.5 million nodes, you will run out of memory because of the 2 GB per-process limit. However, if you have a dual-processor machine with 4 GB of memory and run FLUENT in parallel (shared memory), then you can access up to approximately 3.5 GB for the FLUENT run.

How FLUENT Works

The Host and Cortex processes together use a considerable amount of RAM during partitioning, when the case file is distributed to the compute nodes. Because the overall amount of memory is limited to 4 GB, the compute nodes are restricted to whatever is left after the Host and Cortex processes. See "Introduction to Parallel Processing" <http://www.fluentusers.com/fluent61/doc/ori/html/ug/node1039.htm> posted on the Fluent Users Service Center <http://www.fluentusers.com> (a username and password are required to access this site).
For example, take the following figures: 0.7 GB for host + cortex, and 1.6 GB per compute node at read time. On a dual-processor machine the following amount of RAM would be in use: 0.7 GB + 1.6 GB * 2 nodes = 0.7 + 3.2 = 3.9 GB.

With this total, the operating system reaches its memory limit and FLUENT may crash.
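The arithmetic above can be sketched as a quick budget check. The figures are the example values from this article, not universal constants; measure your own run:

```python
# Rough memory-budget check for a parallel FLUENT read on a 32-bit machine.
# All figures are the example values from this article.
HOST_CORTEX_GB = 0.7   # host + cortex processes at read time
PER_NODE_GB = 1.6      # each compute node at read time
NODES = 2              # dual-processor, shared-memory run
OS_GB = 0.1            # ~100 MB kept by the operating system
RAM_GB = 4.0           # physical memory installed

used = HOST_CORTEX_GB + PER_NODE_GB * NODES
print(f"FLUENT processes: {used:.1f} GB")   # 0.7 + 1.6 * 2 = 3.9 GB
if used + OS_GB >= RAM_GB:
    print("At or over the physical RAM limit -- FLUENT may crash")
```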

FLUENT Tips

Tip 1
Entering the following Scheme command at the prompt in the FLUENT text interface (the parentheses must be included) reduces the memory usage of the cortex + host processes from 0.7 GB to 0.35 GB, so the overall memory utilized would be around 3.6 - 3.7 GB:
(rpsetvar 'parallel/read-use-pack? #f)

Keep in mind that the operating system needs approximately 100 MB to manage the hardware and software resources of the machine, so there is not much memory to spare.

Tip 2
Another alternative is to run the cortex and host processes on a separate machine. For example, you would start the FLUENT run on the host/cortex machine, and the hosts file would contain all the nodes in the cluster except the host/cortex machine.
This way, the compute nodes are not restricted by the memory requirements of the Host and Cortex processes. Furthermore, you can speed up the reading procedure by enlarging the communication buffers and adjusting other parameters involved in reading.
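A minimal sketch of building such a hosts file. The hostnames are hypothetical placeholders, and the launch command shown in the comment is an assumption; check your FLUENT version's documentation for the exact flags:

```python
# Build a hosts file listing only the compute machines, so that the host
# and cortex processes stay on the machine you launch from.
# "node1"/"node2" are hypothetical cluster hostnames; the head node
# (where host/cortex run) is deliberately excluded from the file.
compute_nodes = ["node1", "node2"]

with open("fluent.hosts", "w") as f:
    f.write("\n".join(compute_nodes) + "\n")

# On the head node you would then launch something like (illustrative,
# verify the flags against your FLUENT documentation):
#   fluent 3d -t2 -cnf=fluent.hosts
print(open("fluent.hosts").read())
```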




