Impact of RNG on search_for_keff

One other thing I’m wondering about is how the two initial guesses for the secant method are provided to scipy. It looks like when you don’t provide a bracket, OpenMC uses a secant method: it calls scipy.optimize.newton and supplies a single initial guess. From openmc/search.py:

    elif initial_guess is not None:

        # Generate our arguments
        args = {'func': search_function, 'x0': initial_guess}
        if tol is not None:
            args['tol'] = tol

        # Set the root finding method
        root_finder = sopt.newton

    else:
        raise ValueError("Either the 'bracket' or 'initial_guess' parameters "
                         "must be set")

    # Add information to be passed to the searching function
    args['args'] = (target, model_builder, model_args, print_iterations,
                    run_args, guesses, results)

    # Create a new dictionary with the arguments from args and kwargs
    args.update(kwargs)

    # Perform the search
    zero_value = root_finder(**args)

    return zero_value, guesses, results
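
So, if I’m reading this right, a call with initial_guess=2200.0 and tol=0.5 boils down to something like the following. This is a simplified, self-contained sketch: the dummy residual stands in for OpenMC’s internal search_function, which runs a full transport solve per evaluation and takes more arguments than shown here.

    import scipy.optimize as sopt

    def search_function(x, target):
        # Dummy stand-in for OpenMC's search_function: returns k_eff(x) - target.
        # The real one builds and runs an OpenMC model at every evaluation.
        return (1.0098 - 8.9e-5 * (x - 2200.0)) - target

    zero_value = sopt.newton(func=search_function, x0=2200.0, tol=0.5,
                             args=(1.0,))
    print(zero_value)  # ~2310 for this fake residual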

Looking into the scipy docs, the newton function expects either an fprime (to run Newton’s method) or an x1 that does not equal x0 (to run the secant method). What I can’t figure out from the source code is how it provides the second initial guess, which the secant method requires. If anyone knows how it selects x1 for the secant method, please let me know. From my results, x0 is the initial guess I give and x1 seems to be close by, but I can’t find the actual code that selects x1.
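
In the meantime, one way to see empirically where x1 lands is to wrap the objective and log every point the root finder evaluates. Here is a toy residual standing in for a real OpenMC run:

    import scipy.optimize as sopt

    evaluated = []

    def f(x):
        # Record every point the root finder evaluates
        evaluated.append(x)
        # Fake linear residual with a root near my parameter scale
        return 1e-4 * (2310.0 - x)

    sopt.newton(f, x0=2200.0, tol=0.5)  # no fprime, no x1 -> secant
    print(evaluated[:2])  # x0 and whatever scipy picked for x1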

I also wanted to talk more about tolerance and the search method.

I think in general you want your tolerance to be of the same magnitude as the variance you want in your k-eff. You have to be careful here, though: depending on the solve method, the tolerance is relative or absolute, so the units of your criticality parameter can matter when selecting tol (and it might be desirable to rescale that parameter before running criticality searches). In general, the more precision you want in k-eff, the tighter the tolerance you should use.
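
To make the units point concrete, here is a toy sketch. It assumes, purely for illustration, that the search parameter is a boron concentration in ppm; the residuals are fake linear stand-ins, not OpenMC calls:

    import scipy.optimize as sopt

    def residual_ppm(x):
        # Fake residual with a root at 2310 ppm
        return 1e-4 * (2310.0 - x)

    def residual_wtpct(x):
        # Same problem with the parameter rescaled to weight percent
        return residual_ppm(x * 1e4)

    # tol bounds the final step size in the units of x, so the same number
    # is a tight criterion in ppm but a very loose one in wt%:
    sopt.newton(residual_ppm, x0=2200.0, tol=0.5)   # step must shrink below 0.5 ppm
    sopt.newton(residual_wtpct, x0=0.22, tol=0.5)   # 0.5 wt% is ~5000 ppm of slack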

I’m currently doing searches with tol = 0.5. With my current statistics, the search is producing eigenvalues like this:

    Iteration 0 had 2200.0 resulting in 1.0098160667672698+/-0.00018721841437050537
    Iteration 1 had 2200.2201 resulting in 1.0099607722920139+/-0.00018356704021688818
    Iteration 2 had 2185.0695659388603 resulting in 1.011652646796366+/-0.0001883119546961225
    Iteration 3 had 2289.4176258786706 resulting in 1.001903863015256+/-0.00020276848159237997
    Iteration 4 had 2309.796005017416 resulting in 1.0000511017611478+/-0.00018601420478047472
    Iteration 5 had 2310.358069374611 resulting in 0.9998218612551238+/-0.00020064005290132091

The search resulted in a critical guess of 2309.9212990984197.

It seems like, by your recommendations, my tolerance is much larger than the variance (0.5 vs. (0.00019)^2), but my search seems okay? I’ve also tried runs with tighter tolerances, and they have trouble stopping because the tolerance is too strict. Where exactly does this recommendation come from? Looking at the docs, scipy.optimize.newton has both a tol and an rtol parameter, but OpenMC only forwards tol (via the args dict it builds for the root finder, as in the snippet above), so I’m thinking my specified tol is absolute.
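
For reference, here is a minimal sketch of the stopping behavior as I understand it from the scipy docs; note that convergence is checked on successive iterates of the search parameter, not on k-eff:

    import scipy.optimize as sopt

    def f(x):
        # Fake residual (k_eff(x) - target) with a root near my parameter scale
        return 1e-4 * (2310.0 - x)

    # newton/secant declares convergence when successive iterates satisfy
    # |x_new - x_old| <= tol + rtol * |x_old|, with rtol defaulting to 0,
    # so with only tol set the check is absolute in the units of x.
    root = sopt.newton(f, x0=2200.0, tol=0.5)
    print(root)  # ~2310, accepted once the step shrinks below 0.5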