Summary
I have a very simple Godiva bare-sphere model (HEU metal, radius ~8.71 cm, density 18.74 g/cm³) written in Python using the OpenMC API. No matter what I set for settings.batches, settings.inactive, settings.particles, settings.no_batch_limit = True, settings.trigger_max_batches = 1000, or settings.trigger = None, the simulation always stops after exactly 50 batches and writes statepoint.50.h5. The final combined k-eff is printed, but only ~30–40 active batches are performed (depending on the inactive count).

Key observations so far
- Maximum batch is always 50
- The last batch line is always 50/1 …
- Statepoint file is consistently statepoint.50.h5
- No error message; the simulation finishes normally and prints results and timing statistics
- Same behavior in multiple versions:
  - OpenMC 0.14.0 (conda-forge binary)
  - OpenMC 0.15.3 (previously compiled from source in a different env)
- The k-effective estimate at batch 50 is identical across runs. Example output at batch 50:
  50/1 0.82559 0.84084 +/- 0.00378
  - This value does not change even when I increase settings.particles from 10,000 → 100,000 → 500,000 or change inactive from 20 → 100 → 200.
  - The final combined k-eff is always ~0.84076 ± 0.00281 (at batch 50).
- Changing parameters has no visible effect on early batches:
  - Increasing particles per batch, inactive batches, or max batches does not make the simulation continue past batch 50.
  - The printed k and uncertainty at batch 50 remain exactly the same across all tests.

Questions
- Is there a hidden/internal convergence criterion in 0.14.0 that cannot be disabled via the Python API?
- Is batch 50 a hard-coded limit in some conda builds or older versions?
- How can I force OpenMC to run the full requested number of active batches (120+) without early exit?
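One sanity check I plan to try: since openmc.run() (as opposed to Model.run()) reads the XML input files already present in the working directory rather than my in-memory Python objects, if I understand correctly, a stale settings.xml could pin the batch count at 50. A minimal standard-library-only sketch to confirm what the exported file actually requests (requested_batches is a hypothetical helper name, not part of the OpenMC API):

```python
import xml.etree.ElementTree as ET

def requested_batches(path="settings.xml"):
    """Return (batches, inactive) as written in an OpenMC settings.xml."""
    root = ET.parse(path).getroot()
    batches = int(root.findtext("batches"))
    inactive = int(root.findtext("inactive"))
    return batches, inactive
```

If this still reports batches=50 after settings.export_to_xml(), then my Python settings are never reaching the solver and the problem is in the export/run sequence rather than in OpenMC itself.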