Question:

How can you determine how much simulation time is needed to accurately capture time-averaged data in an unsteady simulation?


Answer:

We have unsteady data that we must average, and the question is: what is a good averaging time? The answer is not that simple! Let's imagine the following: you are tracking, in time, a perfectly sinusoidal signal of period T, whose true mean is 0. In reality we do not know T, so here is what happens when you select a post-processing (averaging) time t:

t = 1/4 T -> mean = 0.53
t = 1/2 T -> mean = 0.54
t = 3/4 T -> mean = 0.16
t = T -> mean = 0.0

So unless you are lucky, you will not capture the correct mean if t is not large enough compared to T. The same exercise with larger values of t:

t = 5T + 1/4 T -> mean = 0.03
t = 5T + 1/2 T -> mean = 0.05
t = 5T + 3/4 T -> mean = 0.02
t = 5T + T -> mean = 0.0

And again:

t = 10T + 1/4 T -> mean = 0.01
t = 10T + 1/2 T -> mean = 0.03
t = 10T + 3/4 T -> mean = 0.01
t = 10T + T -> mean = 0.0

Etc. If t >> T, the difference becomes negligible whether we take t = NT exactly (N an integer) or t = NT plus a fraction of T. For this signal the running mean over [0, t] is T(1 - cos(2*pi*t/T))/(2*pi*t), so its worst-case magnitude decays like 1/(pi*N): doubling the averaging time halves the maximum error.
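As a minimal sketch of this experiment (Python with NumPy; the signal, period, and sampling interval are assumptions for illustration), the snippet below computes the running mean of a zero-mean sine over windows of increasing length. The exact numbers depend on the signal's phase and sampling, so they will not match the tables above exactly, but the decay of the error with averaging time is the same.

```python
import numpy as np

T = 1.0            # period of the sinusoidal signal (assumed)
dt = T / 1000.0    # sampling interval, much finer than T (assumed)

def running_mean(t_end):
    """Mean of sin(2*pi*t/T) sampled over the window [0, t_end)."""
    t = np.arange(0.0, t_end, dt)
    return np.sin(2.0 * np.pi * t / T).mean()

# Running means over longer and longer windows: they decay toward the true mean 0.
for n in (0.25, 0.5, 0.75, 1.0, 5.25, 5.5, 5.75, 10.25, 10.5, 10.75):
    print(f"t = {n:5.2f} T  ->  mean = {running_mean(n * T):+.3f}")
```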

We apply the same logic to LES/DES/SAS averaging:

1) Create a line where post-processing is relevant. For example, we create a line and collect the velocity magnitude along it.
2) Save the data at a regular time interval t (i.e. autosave every fixed number of iterations).
3) Plot the mean velocity on the line at the different autosaves (i.e. we have data averaged over time intervals t, 2t, 3t, 4t, etc.).
4) Plotting them on top of each other, you will notice that the curves look more and more alike as the averaging interval increases.
5) Once the curves are quasi-identical (i.e. the results at Nt and (N+1)t barely differ; see the sketch below), you know that your averaging interval is large enough and that your time-averaged data is statistically converged.
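A minimal sketch of the convergence check in step 5, assuming the autosaved line profiles have already been exported as 1-D arrays (the file names, tolerance, and helper name are hypothetical):

```python
import numpy as np

def profiles_converged(profile_prev, profile_curr, tol=0.01):
    """True when the relative change between the time-averaged profile
    at N*t and the one at (N+1)*t drops below `tol` (L2 norm)."""
    change = np.linalg.norm(profile_curr - profile_prev)
    return change / np.linalg.norm(profile_curr) < tol

# Hypothetical usage: profiles[k] holds the mean velocity magnitude on the
# line, averaged over a window of length (k+1)*t.
# profiles = [np.loadtxt(f"line_mean_velocity_{k}.txt") for k in range(n_saves)]
# for k in range(1, len(profiles)):
#     if profiles_converged(profiles[k - 1], profiles[k]):
#         print(f"Time average statistically converged after {k + 1} intervals t")
#         break
```

Comparing successive windows in a norm simply makes the visual "curves on top of each other" test of step 4 quantitative; the tolerance should be chosen relative to the accuracy you need from the averaged data.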




