Normalization of Flux Tally values with different mesh fineness

Hello everyone,

I am doing a study on mesh fineness through a pressure vessel, tallying flux with an unstructured mesh. I have read the discussion Normalizing Tally to get Flux value - #11 by paulromano but am still not sure whether I am normalizing correctly.

I have a coarse mesh with 40k elements and a fine mesh with 700k elements, and I ran OpenMC on both meshes with 40k particles, 200k particles, and 1 million particles.

The output .vtk file for the coarse mesh looks like this:

[screenshot of the coarse-mesh flux tally in ParaView]

And I pulled the mean flux out of ParaView as ~4e6 [neutron-cm/source].
It is nearly the same for the fine mesh, and also the same across all particle counts.

However, when I try to normalize, I get almost two orders of magnitude difference between the meshes, which makes me think I may be accounting for volume or heating incorrectly.

Using 3.5 MW thermal power P, the heating score H extracted from the heating tally (which, curiously, does not change with increased particle count), and the average tet volume V, I compute the following normalization factor:

f = P / (H * 1.602e-19 * V)

where the mean coarse tet volume is ~15 cm3 and the mean fine tet volume is ~0.9 cm3.

I end up with:
Coarse mesh flux: ~7e10 [neutron/cm2-s]
Fine mesh flux: ~1e12 [neutron/cm2-s]

Two main questions I have are:

  1. The heating value should change with increased particle count, yes? It does not appear to be changing with particle count for me.
  2. Are the power (total power from the core) and volume (volume of each tet element) accounted for correctly?

It makes sense that since the fine-mesh tets have roughly a factor of 16 less volume than the coarse-mesh tets, the normalization factor, which is inversely proportional to volume, will be correspondingly larger, and therefore the flux will be higher as well.
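As a quick arithmetic check of that reasoning (using the approximate values quoted above, not real tally output), the ratio between the two normalization factors reduces to the ratio of the tet volumes:

```python
# Quick check: keeping the tet volume V in the normalization factor
# f = P / (H * 1.602e-19 * V) scales f by the ratio of mean volumes.
# All numbers are the approximate ones quoted in this thread; H is a
# placeholder, not a real tally result.

P = 3.5e6            # core thermal power [W]
H = 1.0e7            # heating score [eV / source particle] (placeholder)
EV_TO_J = 1.602e-19  # conversion [J/eV]

V_coarse = 15.0      # mean coarse tet volume [cm^3]
V_fine = 0.9         # mean fine tet volume [cm^3]

f_coarse = P / (H * EV_TO_J * V_coarse)
f_fine = P / (H * EV_TO_J * V_fine)

# The ratio reduces to V_coarse / V_fine, independent of P and H.
ratio = f_fine / f_coarse
print(ratio)  # ~16.7: about the gap between 7e10 and 1e12
```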

But intuitively, it does NOT make sense to me that a finer mesh would yield a result two orders of magnitude higher (if both are truly converged, which I believe they are, based on the different particle counts).

Can anyone weigh in on this?
Thank you all!!

Hi @kellythomas,

Thanks for using the unstructured mesh features and thanks for the query! It’s an interesting and nontrivial study to perform!

  1. The heating value should change with increased particle count, yes? It does not appear to be changing with particle count for me.
  • In the case that the runs are identical with the exception of a new mesh for the tally (i.e. the heating tally is exactly the same):
    Because the heating tally isn’t dependent on the mesh and the initial random number seed of the simulation is the same, the particles will undergo the exact same interactions as the prior simulation. This will result in the same global heating score as the previous simulation.

  • In the case where the number of particles differs between runs (i.e. the heating score is very nearly the same):
    Because the heating value is a global score, it likely converges much more quickly than the per-element mesh results. So I suspect you're running enough particles in either case that this value is tightly converged. Does the std. dev. of this tally suggest there should be more variation in this value?

  2. Are the power (total power from the core) and volume (volume of each tet element) accounted for correctly?

If you’re pulling results from the VTK file via Paraview, those values are already volume-normalized when written (relevant lines of code).

For a flux score with tally units of particle-cm per source particle, this means that those values are now in units of particle / (source particle · cm^2). Once multiplied by the normalization factor of source particles per second, this will result in the physical flux units particle / (cm^2 · s).

So for the normalization factor, the number of fission neutrons generated per second can be computed as described in the post you’ve linked to – no volume factor needed in this computation. There’s also a more detailed explanation on computation of this value here.
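Putting those two steps together, a minimal sketch of the corrected normalization (assuming the VTK values are already volume-normalized; the heating value H below is a placeholder, not a real tally result) looks like:

```python
# Corrected normalization: the volume is already accounted for in the
# VTK values, so the normalization factor contains no volume term.

P = 3.5e6            # total core thermal power [W = J/s]
H = 1.0e7            # global heating tally mean [eV / source particle] (placeholder)
EV_TO_J = 1.602e-19  # conversion [J/eV]

# Source rate [source particles / s] -- note: no volume factor here.
f = P / (H * EV_TO_J)

# Mean flux read from ParaView [particle / (source particle * cm^2)]
flux_vtk = 4.0e6

# Physical flux [particle / (cm^2 * s)]
flux_physical = flux_vtk * f
```

Because f is the same for both meshes, the coarse and fine results should now agree to within statistics.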

Based on all this, I think the 2 OM difference you’re seeing is coming from the volume factor applied when computing the normalization term.

I hope this helps! Let me know if you have any additional questions.

Best,

Patrick

(Note: I’m aspiring to update the unit notation here to a more readable LaTeX format soon if we decide to enable that feature here.)


Hi @pshriwise! Thank you for your thorough response. This helps: once I take the volume out of the normalization factor, I get comparable fluxes between the two meshes. That’s a relief!

Also, the link for the more detailed explanation doesn’t seem to be working for me. Could you post it again?

Also, are the results written to the statepoint file also volume-normalized when using the mesh tallies?

All the results written to the statepoint file are raw, so no normalization is applied to them.
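So when working from the statepoint instead of the VTK file, the volume division has to be done by hand before applying the source-rate factor. A minimal sketch, with made-up per-element means and volumes for illustration:

```python
# Raw statepoint tally means per element [particle-cm / source particle]
# and the corresponding tet volumes [cm^3]. Values are illustrative only.
raw_mean = [60.0, 13.5, 30.0]
volumes = [15.0, 0.9, 15.0]

# Divide each element's mean by its volume to recover the same
# volume-normalized values the VTK file contains
# [particle / (source particle * cm^2)].
flux_per_sp = [m / v for m, v in zip(raw_mean, volumes)]
```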