Dear all,
Are parallel simulations supposed to yield identical results to a serial simulation? I’ve been testing with some in-house problems, and reproducibility has been unreliable.
In our “candu” test problem, parallel k-eff reproduces serial k-eff across a broad range of thread/node counts and two different compute clusters. The results also agree to within 0.01 mk when comparing between platforms.
Using the same OpenMC binaries, reproducibility breaks down for other problems. One such problem is a simplified HTGR compact with explicit TRISO particles. Another is a simplified lattice cell inspired by the MSRE.
Is this a known issue?
Hi @AlexandreTrottier and welcome to the community. Parallel simulations can and do yield identical results to a serial simulation, but only when the number of particles and batches simulated is not very high. If you run a lot of particles/batches, eventually you end up with some results that differ, by virtue of the fact that floating-point arithmetic is not associative. Because it is a Monte Carlo simulation, even a small difference in an intermediate result (e.g., the current batch k-effective) can cause the parallel run to completely “diverge” from the serial run (diverging in the sense of appearing to be a completely different random sequence of events; it should still converge to the same answer).
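The non-associativity point can be demonstrated in a few lines (this is just an illustration, not OpenMC code): summing the same set of tally scores in a different order, as happens when work is split across threads or MPI ranks, can change the last few bits of the result.

```python
# Illustration only: floating-point addition is not associative, so a
# "parallel" reduction order generally differs from a serial one in the
# last few bits, even though both sum exactly the same numbers.
import random

random.seed(42)
# Scores spanning many orders of magnitude, as collision tallies can.
scores = [random.uniform(0.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(100_000)]

# Serial order: one fixed left-to-right accumulation.
serial_sum = 0.0
for s in scores:
    serial_sum += s

# "Parallel" order: 8 chunks reduced separately, then combined,
# mimicking a per-thread partial sum followed by a final reduction.
chunks = [scores[i::8] for i in range(8)]
parallel_sum = sum(sum(c) for c in chunks)

print(serial_sum == parallel_sum)          # usually False
print(abs(serial_sum - parallel_sum))      # tiny, but nonzero
```

The two sums agree to roughly machine precision, but in a Monte Carlo run even a one-ulp difference in a batch k-effective feeds back into the random walk and the two runs subsequently sample different histories.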
The ability to obtain reproducible results can also be related to the physics. I’ve found that it’s harder to get reproducible results for problems with a lot of graphite, where neutrons tend to have many collisions (i.e., more opportunities to get bitten by floating-point non-associativity). That’s probably what you’re seeing in the HTGR problem.
Hi,
Thanks for getting back to me. This would seem related to the issue of how weights are carried forward. Would the reproducibility then be sensitive to the population control mechanisms?
Yes, reproducibility could be sensitive to population control mechanisms as well, although we don’t have many population control mechanisms at present (mainly survival biasing). There is an open pull request for adding weight windows for variance reduction.
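For context, survival biasing works roughly as follows (a hypothetical sketch, not OpenMC source; the parameter names `weight_cutoff` and `weight_survive` are illustrative): absorption is never sampled outright; instead the particle’s weight is reduced at each collision, and low-weight particles play Russian roulette.

```python
import random

def collide(weight, sigma_a, sigma_t, weight_cutoff=0.25,
            weight_survive=1.0, rng=random.random):
    """Return the particle's weight after one collision, or 0.0 if killed.

    Hypothetical sketch of survival biasing with Russian roulette;
    parameter names and cutoff values are illustrative, not OpenMC's.
    """
    # Survival biasing: rather than killing the particle on absorption,
    # scale its weight by the non-absorption probability.
    weight *= 1.0 - sigma_a / sigma_t

    # Russian roulette: a low-weight particle is either killed or
    # restored to weight_survive, preserving the expected weight.
    if weight < weight_cutoff:
        if rng() < weight / weight_survive:
            weight = weight_survive
        else:
            weight = 0.0
    return weight
```

Because the roulette outcome consumes a random number whose result depends on the exact floating-point weight, any last-bit weight difference between serial and parallel runs can flip a kill/survive decision and send the histories down different paths, which is why population control interacts with reproducibility.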