Impact of RNG on search_for_keff

I'm doing some work with search_for_keff and was wondering about the impact of floating-point error and random number generation. This post is mostly to get others' input on expectations for the search and suggestions from their own experiences. My curiosity comes from running the same script, with the same search settings, and seeing slight differences in the search results. For example:
first search

python pincell.py --search --tol 0.8
Iteration: 1; Guess of 2.20e+03 produced a keff of 1.00982 +/- 0.00019
Iteration: 2; Guess of 2.20e+03 produced a keff of 1.00996 +/- 0.00018
Iteration: 3; Guess of 2.19e+03 produced a keff of 1.01141 +/- 0.00019
Iteration: 4; Guess of 2.30e+03 produced a keff of 1.00022 +/- 0.00020
Iteration: 5; Guess of 2.31e+03 produced a keff of 1.00027 +/- 0.00019
Iteration: 6; Guess of 2.29e+03 produced a keff of 1.00167 +/- 0.00018
Iteration: 7; Guess of 2.31e+03 produced a keff of 1.00003 +/- 0.00018
The critical boron concentration achieved with a tolerance of 0.8 was 2310.2362875938898

second search

python pincell.py --search --tol 0.8
Iteration: 1; Guess of 2.20e+03 produced a keff of 1.00982 +/- 0.00019
Iteration: 2; Guess of 2.20e+03 produced a keff of 1.00996 +/- 0.00018
Iteration: 3; Guess of 2.19e+03 produced a keff of 1.01143 +/- 0.00019
Iteration: 4; Guess of 2.30e+03 produced a keff of 1.00075 +/- 0.00018
Iteration: 5; Guess of 2.31e+03 produced a keff of 0.99986 +/- 0.00020
Iteration: 6; Guess of 2.31e+03 produced a keff of 0.99992 +/- 0.00019
Iteration: 7; Guess of 2.31e+03 produced a keff of 1.00055 +/- 0.00020
Iteration: 8; Guess of 2.31e+03 produced a keff of 1.00029 +/- 0.00020
Iteration: 9; Guess of 2.31e+03 produced a keff of 0.99994 +/- 0.00022
The critical boron concentration achieved with a tolerance of 0.8 was 2310.714525935255

third search

python pincell.py --search --tol 0.8
Iteration: 1; Guess of 2.20e+03 produced a keff of 1.00982 +/- 0.00019
Iteration: 2; Guess of 2.20e+03 produced a keff of 1.00996 +/- 0.00018
Iteration: 3; Guess of 2.19e+03 produced a keff of 1.01127 +/- 0.00019
Iteration: 4; Guess of 2.32e+03 produced a keff of 0.99934 +/- 0.00019
Iteration: 5; Guess of 2.31e+03 produced a keff of 0.99991 +/- 0.00020
Iteration: 6; Guess of 2.31e+03 produced a keff of 1.00023 +/- 0.00019
Iteration: 7; Guess of 2.31e+03 produced a keff of 0.99993 +/- 0.00017
The critical boron concentration achieved with a tolerance of 0.8 was 2307.5171242229408

It's interesting that the first two iterations are identical between searches, but differences start to appear at guess 3. In practice, if the final value satisfies the search criterion, it shouldn't matter how the search gets there, but ideally it converges in as few iterations as possible. Playing with the tolerance is probably the main lever for minimizing iterations at a given number of particles / batches / etc. Looking at the results, the three cases are essentially statistically equivalent once the confidence intervals are considered, so I guess these differences don't matter too much in the end.

I think it might be impossible to generate the same search every time, due to changes in the random number stream from running in parallel and/or floating-point error causing differences in each iteration's guess. I'm just wondering if anyone has thoughts on this (especially which factors contribute to differences between searches) and/or recommendations. It seems like erring on the side of more particles produces searches that end closer to each other; I would imagine the search is more susceptible to run-to-run differences when the uncertainty on each iteration is higher.
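For reference, these are the knobs I have in mind when I say "more particles" (just a sketch; the values are illustrative rather than what I actually ran, and I'm not sure a fixed seed is enough to guarantee bit-for-bit reproducibility in parallel):

```python
import openmc

# Settings that control how repeatable the search path is (illustrative values).
settings = openmc.Settings()
settings.particles = 20000   # more particles per batch -> smaller sigma on each keff
settings.batches = 150
settings.inactive = 50
settings.seed = 1            # fix the RNG seed; parallel execution may still perturb results
```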

I'm also wondering if there's a way to get the eigenvalue at the critical guess without re-running the model at that value. The final print says

The critical boron concentration achieved with a tolerance of 0.8 was 2307.5171242229408

But it doesn't report the eigenvalue here. I've been running with the prints of the solves turned off, so I'm unsure whether it actually does a solve at that concentration, or whether it just stops once the convergence criterion is met and never re-runs a final critical case (which would be fine).
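For context, my call looks roughly like the sketch below (build_model is my pin-cell builder, not shown, which takes the boron concentration in ppm; the argument values are illustrative). If I'm reading the return values correctly, the evaluated guesses and their keffs come back as lists, which is part of why I'm asking whether the final concentration actually gets its own solve:

```python
from openmc.model import search_for_keff

# Roughly how I'm invoking the search; build_model(ppm) returns my pin-cell model.
crit_ppm, guesses, keffs = search_for_keff(
    build_model,
    initial_guess=2200.,     # illustrative starting concentration [ppm]
    tol=0.8,
    print_iterations=True,
)

print('critical concentration:', crit_ppm)
print('guesses evaluated:', guesses)   # one entry per "Iteration:" line above
print('keff at each guess:', keffs)    # keff +/- sigma for each evaluated guess
```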

Hi Ligross,

I have played around a little with the search function, and you are correct that the better your counting statistics (particles/batches/generations), the less variance you see between repeated criticality searches. If you could theoretically run the simulation with counting statistics that produce no variance, the criticality search would be exactly the same between runs, since floating-point rounding is almost guaranteed to be smaller than your required tolerance or the uncertainty in k-eff in normal cases. In general, I think you want your tolerance to be on the same order of magnitude as the uncertainty you want in k-eff. You do have to be careful here, though: depending on the solve method, the tolerance is either relative or absolute, so the units of your criticality parameter can matter when selecting tol (and it can be worth rescaling the parameter before running criticality searches; see the sketch below). Consequently, the more precision you are looking for in k-eff, the tighter the tolerance you should use.

From what I have gathered from people, if you are just keeping a reactor critical for depletion calculations you can get away with less accuracy, but if you're monitoring things like the reactivity devices within a reactor you will need much better statistics. Also, if you're adjusting something that significantly affects multiple parts of your geometry, the simulations can take a long time.
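As a sketch of what I mean by rescaling the parameter before searching (all names and numbers here are placeholders, and build_model stands in for whatever parametrized builder you are using): changing the units of the search variable makes it easier to reason about what a given tol means for the physical parameter, especially if your chosen solver treats it as a relative tolerance.

```python
from openmc.model import search_for_keff

PPM_SCALE = 1000.0   # search in units of 1000 ppm instead of ppm

def build_model_scaled(x):
    """x is boron concentration in units of 1000 ppm, so x = 2.3 means 2300 ppm."""
    return build_model(x * PPM_SCALE)

crit_x, guesses, keffs = search_for_keff(
    build_model_scaled,
    bracket=[1.5, 3.0],     # 1500-3000 ppm expressed in the scaled units
    tol=1e-3,               # chosen for the scaled variable; check whether your solver
                            # interprets this as relative or absolute
    bracketed_method='brentq',
)
crit_ppm = crit_x * PPM_SCALE
```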

As for whether it completes a last transport solve, I don't believe it does for the initial-guess method: if you leave the transport output visible you can see the results, and there is noticeably less time after the last guess before the final value is reported. It appears to do a kind of averaging between the two bounds found in the final guesses, assuming a line connecting the closest guesses and choosing a point along it nearest the guess that was closest to criticality. If you use the bracket method, though, I found that it does actually run the last guess; that is a difference between the secant (initial-guess) method and the bisect (bracket) method.
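One way to check this for your own case (a rough sketch; it assumes build_model is your builder and relies on the guesses/keffs lists that search_for_keff returns):

```python
from openmc.model import search_for_keff

crit_ppm, guesses, keffs = search_for_keff(build_model, initial_guess=2200., tol=0.8)

# If the returned concentration never shows up among the evaluated guesses, then no
# transport solve was performed at exactly that value and there is no keff for it.
matches = [i for i, g in enumerate(guesses) if abs(g - crit_ppm) < 1e-9]
if matches:
    print('final value was evaluated, keff =', keffs[matches[-1]])
else:
    print('final value was interpolated; last evaluated keff =', keffs[-1])
```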

Depending on your specific scenario, using the Python API can be beneficial for speeding up the search, because you can dynamically feed in better initial guesses or brackets. For example, if you were doing a boron criticality search after depletion simulations on a reactor whose reactivity continuously decreases over the lifespan being evaluated, you could improve the bracket method by dynamically setting the upper bracket to the previous step's critical boron concentration, since you know reactivity will only go down. If you are familiar enough with the reactor physics to expect a maximum reactivity change between depletion steps, you could also tie the lower bracket to the upper bracket (lower bracket = upper bracket - X), as long as you know the change in critical concentration per step will not exceed X. Lastly, for long simulations where you are doing many criticality searches (such as simulating the entire fuel depletion process for a reactor), it can be useful to first run a criticality search with low counting statistics (1/10th or 1/20th of your normal particle count, for example) until it loosely converges, and then run a second search with normal counting statistics using the previously found value as the initial guess; that second search should only need a few extra iterations to converge close to criticality. A rough sketch of this kind of loop is below.
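Here is a rough sketch of that idea (every name and number is a placeholder: build_model(ppm, particles=...) stands in for your own parametrized builder, the bracket width and tolerances are invented, and the depletion bookkeeping is omitted):

```python
from openmc.model import search_for_keff

MAX_PPM_DROP = 300.0      # assumed maximum drop in critical boron between steps
prev_crit_ppm = 2500.0    # starting point / upper bound for the first step
n_depletion_steps = 10

for step in range(n_depletion_steps):
    # ... advance the depletion state to this step (omitted) ...

    # Coarse pass: reduced statistics and a loose tolerance to get close cheaply.
    coarse_ppm, _, _ = search_for_keff(
        lambda ppm: build_model(ppm, particles=2000),    # ~1/10th normal particles
        bracket=[prev_crit_ppm - MAX_PPM_DROP, prev_crit_ppm],
        bracketed_method='brentq',
        tol=10.0,
    )

    # Refined pass: normal statistics, starting from the coarse answer.
    crit_ppm, _, _ = search_for_keff(
        lambda ppm: build_model(ppm, particles=20000),
        initial_guess=coarse_ppm,
        tol=0.8,
    )

    # Reactivity only decreases in this scenario, so this bounds the next step.
    prev_crit_ppm = crit_ppm
```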

If the reactor's reactivity does not continuously decrease (a CANDU at beginning of cycle, for example, actually experiences a bump due to plutonium formation), then the initial-guess method would be preferred, but you should still have the initial guess dynamically update to whatever the previous critical boron concentration was. There are quite a few situations where, with a little thought, you can optimize the searches, and that can make a significant difference in long simulations that require repeated criticality searches. Lastly, there are options for the bracket-method solver, and I would suggest switching it to brentq unless you really want to evaluate the differences between the solvers.
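For reference, the solver choice is just an argument to the search (placeholder values again):

```python
crit_ppm, guesses, keffs = search_for_keff(
    build_model,
    bracket=[prev_crit_ppm - 300., prev_crit_ppm],   # placeholder bounds
    bracketed_method='brentq',                       # instead of the default bisection
    tol=0.8,
)
```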