Hello to everyone.
I’m conducting a validation study on a TRIGA Mark II reactor, using Serpent 2 as a cross-code benchmark.
I’m experiencing an issue with the value of k-effective: the OpenMC model is consistently more reactive than Serpent predicts.
Below are the values at different control-rod positions:
- FULL IN: k_OpenMC = 1.0004 ± 0.00056, k_Serpent = 0.96862 ± 0.00075
- FULL OUT: k_OpenMC = 1.04887 ± 0.00059, k_Serpent = 1.02453 ± 0.00076
- CRITICAL: k_OpenMC = 1.03287 ± 0.00060, k_Serpent = 1.00202 ± 0.00079
Both codes use the JEFF-3.3 library (for OpenMC, it has been downloaded from here) and the same statistics: 4,000 particles per batch, 700 active + 300 inactive batches (Shannon entropy converged), and the same temperatures (the ‘interpolation’ method in OpenMC).
I repeated the calculations after removing the S(alpha, beta) libraries from the material definitions in both codes. Surprisingly, the k-eff values become much more comparable:
- FULL IN: k_OpenMC = 1.01387 ± 0.00055, k_Serpent = 1.00833 ± 0.00073
- FULL OUT: k_OpenMC = 1.06651 ± 0.00057, k_Serpent = 1.06824 ± 0.00077
- CRITICAL: k_OpenMC = 1.04858 ± 0.00056, k_Serpent = 1.04450 ± 0.00076
In particular, S(alpha, beta) tables were used for:
- H in H2O
- Graphite
- H in ZrH
Finally, I tried changing the OpenMC data library (via the environment variable set in .bashrc) to the other officially available ones, without any significant improvement. Below are the OpenMC k values for the FULL IN configuration:
- JEFF-3.3: k = 1.0004 ± 0.00056
- ENDF/B-VII.1: k = 0.99695 ± 0.00056
- ENDF/B-VIII.0: k = 0.99575 ± 0.00062
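Switching libraries amounts to repointing the OPENMC_CROSS_SECTIONS environment variable at the corresponding cross_sections.xml; the paths below are placeholders for my local data directories:

```shell
# Point OpenMC at a given processed library (placeholder path)
export OPENMC_CROSS_SECTIONS=/opt/nuclear-data/jeff33/cross_sections.xml
echo "$OPENMC_CROSS_SECTIONS"
```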
So the problem seems related to the S(alpha, beta) treatment.
Do you have any suggestions on how to carry on the analysis?
Thanks,
Lorenzo
Addendum:
I checked the volumes with the stochastic volume routine: they match the real values.