Depletion/decay flux tallies inconsistency

Hi all,

I’m trying to get both neutron and photon dose rates for a full fission core, both through life while supercritical and then after shutdown (turning the control drums inwards to make it subcritical). My current method is to run the initial depletion in ‘eigenvalue’ run mode for a certain number of years, then modify the model geometry to rotate the drums. I then continue the depletion for another few years with the altered model, after giving it the depleted materials from the depletion_results.h5 file, but in ‘fixed source’ run mode, with normalization_mode = ‘source-rate’ in the operator and the source rates set to 0 in the integrator to get decay-only steps.
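For reference, the two-phase schedule described above can be written out as plain bookkeeping. This is only an illustrative sketch: the step lengths, power level, and source rates below are made-up placeholder values, not the actual inputs.

```python
# Sketch of the two-phase schedule described above (all values
# illustrative). Phase 1 runs in eigenvalue mode, normalised by power;
# phase 2 reuses the depleted materials in fixed-source mode with all
# source rates set to 0, so each step applies radioactive decay only.

DAY = 24 * 60 * 60  # seconds per day, if timesteps are in seconds

phase1_timesteps = [365 * DAY] * 5   # operating steps before shutdown
phase1_power = [3.2e9] * 5           # W, placeholder power level

phase2_timesteps = [365 * DAY] * 5   # post-shutdown steps
phase2_source_rates = [0.0] * 5      # zero source rate -> decay-only
```

The zero source rates are what make the second phase "decay only": with nothing driving transport-induced reactions, the integrator only evolves the material compositions through decay.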

The tallies I’m looking at are just flux tallies at the core boundary and slightly further away. I get good results for each of the initial supercritical depletion steps, for both neutrons and photons, in the ‘openmc_simulation_nX.h5’ files. However, when I look at the tallies for the fixed-source continuation depletion, there are no results, and the ‘tallies.out’ file is also unchanged. The tallies are present in the altered model XML files, the same as in the initial model XML, and they appear in the statepoint file, but they don’t seem to actually be tallying any events. For example, below is a tally in the statepoint at stage 6 (initial depletion), and then the same tally at stage 16 (second depletion) missing its results dataset.

(openmc15) > h5ls openmc_simulation_n6.h5/tallies/tally\ 1
estimator                Dataset {SCALAR}
filters                  Dataset {3}
n_filters                Dataset {SCALAR}
n_realizations           Dataset {SCALAR}
n_score_bins             Dataset {SCALAR}
name                     Dataset {SCALAR}
nuclides                 Dataset {1}
results                  Dataset {800, 1, 2}
score_bins               Dataset {1}
(openmc15) > h5ls openmc_simulation_n16.h5/tallies/tally\ 1
estimator                Dataset {SCALAR}
filters                  Dataset {3}
n_filters                Dataset {SCALAR}
n_realizations           Dataset {SCALAR}
n_score_bins             Dataset {SCALAR}
name                     Dataset {SCALAR}
nuclides                 Dataset {1}
score_bins               Dataset {1}

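(As an aside, the difference between the two listings above is just the presence of the ‘results’ dataset, so it’s easy to check all the simulation files programmatically. A minimal sketch, assuming h5py-style group access; the check works on any mapping, so plain dicts stand in for the two listings here.)

```python
def missing_results(tally_group):
    """Return True if a tally group lacks the 'results' dataset,
    as in the openmc_simulation_n16.h5 listing above.
    Works with h5py groups or any dict-like mapping."""
    return "results" not in tally_group

# With h5py this would be (hypothetical usage, file names as above):
#   with h5py.File("openmc_simulation_n16.h5") as f:
#       missing_results(f["tallies/tally 1"])
# Illustrated here with dict stand-ins for the two listings:
n6 = {"results": "present", "score_bins": "present"}
n16 = {"score_bins": "present"}
```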
I’m not sure why this is. I’ve tried separating the depletion into two scripts and recreating the tallies for the second depletion, but that hasn’t worked.

Any help or suggestions on the method I am using are appreciated!
Thank you

Hi Eanuter,
It might not answer your question, but have you tried setting the power to 0 when doing depletion? From the documentation, power can be either a float or an iterable of floats, e.g. [3200, 3200, 0, 0, 0, 3200, 3200, …]. So instead of stopping your depletion calculation and then rerunning it with a fixed source, you can use a single depletion calculation for everything, with or without thermal power. I hope this could simplify your calculation.
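To make the suggestion concrete, here is a small sketch of such a power list. The numbers and units are illustrative only, and must match whatever the operator expects (e.g. watts vs. megawatts):

```python
# Sketch of the single-run approach: one depletion calculation whose
# power list drops to zero for the shutdown/decay steps (values and
# units illustrative only).

timesteps = [365.0] * 7  # days, one entry per depletion step
power = [3200e6, 3200e6, 0.0, 0.0, 0.0, 3200e6, 3200e6]  # W

# Each zero entry normalises that step's reaction rates to zero, so
# the step effectively contributes decay only.
assert len(power) == len(timesteps)
```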

Hi wahidluthfi,

Thanks very much for your reply and suggestions. I have tried this method for getting shutdown photon doses, in a similar way to the method in one of the workshop examples, and it does produce tally results. I’m not sure how to adapt it for neutron doses, though.

For the neutron doses, I’ve tried running the depletion as one calculation, but because the reactor has keff > 1 at start of life, I cannot use a fixed-source run mode: it gives an error about the secondary particle bank being too large. If I use eigenvalue run mode instead, I’m then unsure how to normalise the tallies given a power of 0. I’m also not sure how I would go about changing the geometry mid-depletion to give the keff < 1 of shutdown. Is there a way to change settings/geometry mid-depletion without stopping the calculation?