I built a model and am trying to benchmark it against the experimental critical states of the ETRR2 research reactor.

This reactor has a small light-water-moderated and -reflected core, with several beryllium reflector elements added from core to core. The fuel is in the form of thin plates with a U3O8-aluminium dispersion meat. The fuel region is not large - about 45 x 50 x 80 cm - and this is also the region over which I distribute the initial source guess in the criticality calculations.
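For what it's worth, this is roughly how I set up that initial source guess and the entropy mesh (a sketch; the bounds assume the fuel region is centered at the origin, so adjust them to your actual geometry):

```python
import openmc

settings = openmc.Settings()
settings.run_mode = 'eigenvalue'

# Uniform initial source guess over the 45 x 50 x 80 cm fuel region
# (assumed centered at the origin -- adjust to your model).
bounds = [-22.5, -25.0, -40.0, 22.5, 25.0, 40.0]
settings.source = openmc.IndependentSource(
    space=openmc.stats.Box(bounds[:3], bounds[3:]))

# A Shannon-entropy mesh over the same region, used to diagnose
# fission-source convergence during the inactive batches.
mesh = openmc.RegularMesh()
mesh.lower_left = bounds[:3]
mesh.upper_right = bounds[3:]
mesh.dimension = (10, 10, 16)
settings.entropy_mesh = mesh
```

This is just a configuration fragment; it needs a full geometry and nuclear data to actually run.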

For most of the critical states I model, I get keff close to 1 (about 200-500 pcm from below). However, **for several critical states I suddenly get -1000 to -3000 pcm!** When I repeat the calculation with a different initial random number seed, I usually again get about -300 pcm. These calculations were done on my laptop with 200 batches (30-50 inactive) and 10,000 or 100,000 particles per batch, using OpenMC 0.14.0.
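To make "close to 1" concrete, I compare the deviation in pcm against the reported standard deviation; here is the small helper I use (plain Python, nothing OpenMC-specific, names are my own):

```python
def pcm_deviation(keff, reference=1.0):
    """Deviation of keff from the reference in pcm (1 pcm = 1e-5)."""
    return (keff - reference) * 1e5

def is_statistical(keff, sigma, reference=1.0, n_sigma=3):
    """True if the deviation is within n_sigma of the reported Monte
    Carlo uncertainty, i.e. plausibly noise rather than a model bias."""
    return abs(keff - reference) <= n_sigma * sigma
```

For example, keff = 0.99700 +/- 0.00010 is -300 pcm, far outside 3 sigma, so a deviation like that is a real bias and not statistics.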

I checked keff and Shannon entropy convergence. They both seem to converge within a few tens of batches; however, from time to time, as I said, keff converges to the "wrong" value!
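In case it helps, this is the rough stationarity check I apply to the per-batch entropy (a hypothetical helper of my own; the entropy values themselves come from the statepoint file, e.g. via `openmc.StatePoint`):

```python
def entropy_converged(entropy, window=20, rel_tol=0.01):
    """Heuristic check: the last `window` batches count as converged if
    every entropy value lies within +/- rel_tol (relative) of the mean
    over that window. A slow drift or oscillation fails the check."""
    if len(entropy) < window:
        return False
    tail = entropy[-window:]
    mean = sum(tail) / window
    return all(abs(h - mean) <= rel_tol * abs(mean) for h in tail)
```

Note that a flat entropy trace is necessary but not sufficient: with strong undersampling the source can look stationary while still being wrong.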

I assumed the statistics might not be sufficient and repeated the same calculations on a cluster (OpenMC 0.13.4 is installed there) with 1,000,000 particles per batch. I get the same behavior, if anything worse! It seems to get worse for cores that contain more beryllium elements around the fuel elements.

Any ideas? Do I have some stupid geometry bug? (I plotted all the cores with openmc-plotter; they look as I intended them to look...) Or maybe some strange S(alpha,beta) issue? (I saw a post here about that; I use c_U_in_UO2, for instance.)
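For completeness, this is how the thermal scattering tables are attached in my materials (a sketch with placeholder densities, not my actual model; the table names are the standard ones shipped with ENDF-based OpenMC data libraries):

```python
import openmc

# Hypothetical beryllium reflector material -- density is a placeholder.
be = openmc.Material(name='Be reflector')
be.add_element('Be', 1.0)
be.set_density('g/cm3', 1.85)
be.add_s_alpha_beta('c_Be')  # thermal scattering in metallic beryllium

# Light water moderator/reflector.
water = openmc.Material(name='light water')
water.add_nuclide('H1', 2.0)
water.add_nuclide('O16', 1.0)
water.set_density('g/cm3', 1.0)
water.add_s_alpha_beta('c_H_in_H2O')
```

If the beryllium elements are missing their c_Be table, the spectrum in the reflector would be off, which might line up with the problem growing with the number of beryllium elements.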