K-eff disagreement on thermal scattering

Hello everyone,

I’m conducting a validation study of a TRIGA Mark II reactor, adopting Serpent 2 as a cross-code benchmark.

I’m experiencing a discrepancy in k-effective: the OpenMC model is consistently more reactive than Serpent predicts.
Below I report the values at different control rod positions:

  • FULL IN: k_OpenMC = 1.0004 ± 0.00056, k_Serpent = 0.96862 ± 0.00075
  • FULL OUT: k_OpenMC = 1.04887 ± 0.00059, k_Serpent = 1.02453 ± 0.00076
  • CRITICAL: k_OpenMC = 1.03287 ± 0.0006, k_Serpent = 1.00202 ± 0.00079

Both codes adopted the JEFF 3.3 library (for OpenMC, it was downloaded from here) and used the same statistics: 4000 particles per batch, 700 active + 300 inactive batches (Shannon entropy converged). The temperatures are also the same (‘interpolation’ method for OpenMC); the corresponding OpenMC settings are sketched below.
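For reference, a minimal sketch of those settings in the OpenMC Python API, assuming only the statistics quoted above (everything else is illustrative):

```python
import openmc

settings = openmc.Settings()
settings.particles = 4000   # neutrons per batch
settings.batches = 1000     # total: 700 active + 300 inactive batches
settings.inactive = 300
# Interpolate cross sections between the nearest tabulated temperatures
settings.temperature = {'method': 'interpolation'}
settings.export_to_xml()
```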

I repeated the calculations removing the S(α,β) libraries from the material definitions in both codes. Surprisingly, the k-eff values are (more) comparable:

  • FULL IN: k_OpenMC = 1.01387 ± 0.00055, k_Serpent = 1.00833 ± 0.00073
  • FULL OUT: k_OpenMC = 1.06651 ± 0.00057, k_Serpent = 1.06824 ± 0.00077
  • CRITICAL: k_OpenMC = 1.04858 ± 0.00056, k_Serpent = 1.0445 ± 0.00076

In particular, S(α,β) tables were adopted for (the material definitions are sketched after this list):

  • H in H2O
  • Graphite
  • H in ZrH
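For illustration, a minimal sketch of how these tables are attached in the OpenMC Python API; the compositions and densities below are placeholders, not the actual model values:

```python
import openmc

# Water with the bound-hydrogen thermal scattering table
water = openmc.Material(name='water')
water.add_element('H', 2.0)
water.add_element('O', 1.0)
water.set_density('g/cm3', 1.0)
water.add_s_alpha_beta('c_H_in_H2O')

# Graphite reflector
graphite = openmc.Material(name='graphite')
graphite.add_element('C', 1.0)
graphite.set_density('g/cm3', 1.7)
graphite.add_s_alpha_beta('c_Graphite')

# U-ZrH fuel: the table applies to hydrogen bound in ZrH
fuel = openmc.Material(name='fuel')
fuel.add_element('Zr', 1.0)
fuel.add_element('H', 1.6)
fuel.add_element('U', 0.1, enrichment=19.75)  # placeholder enrichment
fuel.set_density('g/cm3', 6.0)
fuel.add_s_alpha_beta('c_H_in_ZrH')
```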

Finally, I tried changing the OpenMC data library (via the environment variable in .bashrc) to the ones officially available, without any significant improvement. Below I report the k values from OpenMC in the FULL IN configuration (the way I switch libraries is sketched after the list):

  • JEFF 3.3 - k = 1.0004 ± 0.00056
  • ENDF/B-VII.1 - k = 0.99695 ± 0.00056
  • ENDF/B-VIII.0 - k = 0.99575 ± 0.00062
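For completeness, switching libraries just means pointing OpenMC at a different cross_sections.xml; a minimal sketch, where the paths are placeholders:

```python
import os
import openmc

# Equivalent to exporting the variable in .bashrc (placeholder path)
os.environ['OPENMC_CROSS_SECTIONS'] = '/data/jeff-3.3-hdf5/cross_sections.xml'

# Alternatively, set it explicitly on the Materials collection
materials = openmc.Materials()
materials.cross_sections = '/data/endfb-viii.0-hdf5/cross_sections.xml'
```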

It seems to be a problem related to the S(α,β) treatment.
Do you have any suggestions on how to carry on the analysis?

Thanks,
Lorenzo

Addendum:
I checked the volumes with the stochastic volume calculation routine: they match the real values.
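For reference, a minimal sketch of OpenMC’s stochastic volume calculation; the cell, sample count, and bounding box are placeholders, not the actual model values:

```python
import openmc

# Placeholder cell; in practice this comes from the full model geometry
fuel_cell = openmc.Cell(name='fuel element')

vol_calc = openmc.VolumeCalculation(
    [fuel_cell],                    # domains whose volumes are estimated
    samples=10_000_000,             # random points sampled in the box
    lower_left=(-30., -30., -40.),  # bounding box enclosing the domains (cm)
    upper_right=(30., 30., 40.),
)

settings = openmc.Settings()
settings.volume_calculations = [vol_calc]
settings.export_to_xml()
# Requires the model's geometry.xml/materials.xml to be present
openmc.calculate_volumes()          # writes volume_1.h5 with the estimates
```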

A few thoughts/comments:

  • Your results without S(α,β) still do not agree (the differences are statistically significant), which means there are likely issues beyond just S(α,β).
  • The temperature treatment in Serpent is different than in OpenMC. To do an apples-to-apples comparison, you’d have to run a problem where materials are at a temperature for which cross sections are tabulated (i.e., no interpolation between two temperatures).
  • I’m not sure what your source of ACE files for JEFF 3.3 was, but there may be differences in how it was processed compared with the OpenMC HDF5 files for JEFF 3.3. The best way to get a consistent comparison is to take the ACE files you’re using with Serpent and convert those to HDF5 (see the sketch after this list).
  • In general, you may want to try to simplify your problem piece by piece to eliminate possible sources of differences.
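To make the conversion suggestion concrete, a minimal sketch using the openmc.data module; the ACE file paths are placeholders for whatever files Serpent is actually using:

```python
import openmc.data

# Convert an incident-neutron ACE file to OpenMC's HDF5 format
u235 = openmc.data.IncidentNeutron.from_ace('jeff33_ace/U235.ace')
u235.export_to_hdf5('U235.h5')

# Thermal scattering tables are converted the same way
hzr = openmc.data.ThermalScattering.from_ace('jeff33_ace/h-zrh.ace')
hzr.export_to_hdf5('c_H_in_ZrH.h5')

# Register the converted files so OpenMC can find them
library = openmc.data.DataLibrary()
library.register_file('U235.h5')
library.register_file('c_H_in_ZrH.h5')
library.export_to_xml('cross_sections.xml')
```

Pointing OPENMC_CROSS_SECTIONS at the resulting cross_sections.xml then ensures both codes start from the same underlying ACE data.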

Hello,

It seems that the problem was related to the conversion of the cross section and thermal scattering libraries from ACE to HDF5.

May I ask why that occurred? From what I’ve understood, taking the data from here should guarantee a reliable dataset, consistent with the original ACE files.

Thank you,
Lorenzo

@lorenzoloi We did discover an issue with the data libraries that are available from the link you’ve posted above. Namely, there was a bug in NJOY that resulted in some multitemperature datasets not being written to ACE files correctly. This bug has been fixed in the latest release of NJOY and I’ve gone ahead and regenerated all of our official libraries available on openmc.org. Can you try redownloading the JEFF 3.3 data from there and see if it gives a more consistent comparison? Very curious to know if this was indeed the root cause of the problem you’re observing.

Hello @paulromano,
I downloaded the JEFF 3.3 library again from the Official Data Libraries: a quick calculation gives me the same results as when I opened the topic.
Was the bug also related to the thermal scattering datasets?

I think the bug could in theory affect both incident neutron and thermal scattering data. In general, though, if you want the data to be consistent between codes, I will repeat my suggestion from before:

I have no idea how the JEFF 3.3 data for Serpent was processed, so there very well may be some differences between that and the official data libraries for OpenMC. If you’re able to do a comparison using the same data you used for Serpent, that should hopefully tell us if the difference is indeed due to how the data was processed. Of course, even in that case I’d still like to know what is different about the data that is leading to different results.