Depletion question

Hi All,

I ran a pin-power simulation of a 17×17 assembly. It takes several minutes to finish a run with 1000 batches and 4000 particles per batch, but when I run a depletion of the same model in “energy deposition” mode, it takes about 2 hours to finish a single state point.
I’m not familiar with the methodology of depletion. Why does it take so much longer?
Then I tried the CASL PWR depletion chain instead of the ENDF/B-VII.1 chain. This halves the computation time, but it is still time-consuming.

Does energy deposition mode have much influence on the run time? I also tried fission-q mode and set the fission Q values manually ({"U235": 2.02E+02, "U238": 2.12E+02, "Pu239": 2.11E+02, "Pu240": 2.14E+02, "Pu241": 2.14E+02}). The simulation exited after calculating the first state point and threw the following error:
```
No energy reported from OpenMC tallies. Do your HDF5 files have heating data?
An exception has occurred, use %tb to see the full traceback.
SystemExit: 1
```
How can I solve this problem?

Thanks in advance,
Yue.


Hi Yue,

The time difference is caused by the different numbers of nuclides loaded during initialization.
In a pure neutron transport run, only the nuclides you added to your materials.xml are loaded. In a depletion simulation, however, every nuclide in the chain file that has a neutron-induced reaction cross section in cross_sections.xml is loaded, and the cross-section lookup (whose cost grows with the number of nuclides) takes a significant portion of the Monte Carlo particle transport time. You can certainly try the simplified chain file you mentioned, together with MPI or OpenMP parallel computation, to improve the efficiency.
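As a rough illustration, a depletion setup that points the operator at a simplified chain could look like the sketch below. This is an untested sketch, not your exact model: the operator class and keyword names vary between OpenMC versions (older releases use openmc.deplete.Operator(geometry, settings, chain_file=...)), and the power and time steps are placeholders.

```python
import openmc
import openmc.deplete

# Assumes you have already built geometry, materials, and settings objects
model = openmc.Model(geometry=geometry, materials=materials, settings=settings)

# Use the simplified CASL PWR chain so fewer nuclides are loaded for transport
op = openmc.deplete.CoupledOperator(model, chain_file="chain_casl_pwr.xml")

# Placeholder power (W) and time steps (days); replace with your own values
integrator = openmc.deplete.PredictorIntegrator(
    op, timesteps=[30.0, 30.0, 30.0], power=1.0e6, timestep_units='d')
integrator.integrate()
```

Running such a script under mpiexec (and with OMP_NUM_THREADS set) is how the MPI/OpenMP parallelism mentioned above is enabled.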

Anyway, I would prefer 100 batches and 40,000 particles per batch in this case.

As for the fission-q problem, it might be caused by a mistake in the units (the Q values should be given in eV), but I am not sure unless you provide more details of the model.
https://github.com/openmc-dev/openmc/blob/develop/openmc/deplete/operator.py
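If the units are indeed the issue, something like the following untested sketch may help. Note the Q values are in eV (about 2e8 eV rather than 2.02E+02), and the mode keyword differs between versions (energy_mode in older releases, normalization_mode in newer ones).

```python
# Fission Q values must be given in eV, so ~200 MeV becomes ~2e8 eV
fission_q = {
    "U235": 2.02e8,
    "U238": 2.12e8,
    "Pu239": 2.11e8,
    "Pu240": 2.14e8,
    "Pu241": 2.14e8,
}

# "model" is the same openmc.Model as in the sketch above; older versions
# take (geometry, settings) and an energy_mode argument instead
op = openmc.deplete.CoupledOperator(
    model, chain_file="chain_casl_pwr.xml",
    normalization_mode="fission-q", fission_q=fission_q)
```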

Best,
Jiankai

To add on to what Jiankai said – if you are using the “energy deposition” mode for depletion, you need to make sure you are either 1) using the “official” data library from openmc.org or 2) generating a library yourself with openmc.data.IncidentNeutron.from_njoy. When libraries are generated for OpenMC, we put special heating data in the HDF5 files that won’t exist if you just convert existing ACE files. If this heating data is missing from your data library, you will get the error message you are seeing.
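As a minimal sketch of option 2 (the ENDF file name and output name below are placeholders, and NJOY must be available on your system):

```python
import openmc.data

# Process an ENDF evaluation with NJOY; this produces the heating data that
# the energy-deposition mode needs in the resulting HDF5 library
u235 = openmc.data.IncidentNeutron.from_njoy('n-092_U_235.endf')
u235.export_to_hdf5('U235.h5')
```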

Best regards,
Paul

Thank you, Jiankai. Very helpful.

On Tuesday, July 14, 2020 at 9:25:23 PM UTC+8, Jiankai YU wrote:

Hi Jiankai,

Could you tell me how to determine the number of batches and particles in an OpenMC simulation? Thank you!

Best,
Yahui Wang

Some general advice:

  • Inactive batches — this is determined by how long it takes the source distribution to converge. For many problems, 100 or fewer should be sufficient, but if you are simulating a full-core reactor or a spent fuel storage pool, it may take significantly longer to converge. The Shannon entropy metric gives you a way of determining whether you used enough (see the settings sketch after this list).
  • Particles per batch — Basically, you need enough to avoid bias in k-eff and tallies (> 10k generally). If you are running a large parallel job with many MPI processes / OpenMP threads, you will also want to make sure that each thread has enough work to avoid load imbalances.
  • Active batches — The overall stochastic uncertainty in the results will be determined by the product of the number of active batches (total - inactive) and the particles per batch. If you do a run and then decide you want an uncertainty that is twice as small, you will need to run 4 times as many active batches (or 4x particles per batch).
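As a concrete illustration of these points, a settings block with an entropy mesh might look like the sketch below. All numbers and mesh bounds are placeholders, not recommendations for your specific model, and openmc.RegularMesh is called openmc.Mesh in older releases.

```python
import openmc

settings = openmc.Settings()
settings.particles = 40000   # particles per batch
settings.inactive = 50       # batches discarded while the source converges
settings.batches = 150       # total batches, so 100 active batches

# Shannon entropy mesh over the fissionable region to monitor source convergence
entropy_mesh = openmc.RegularMesh()
entropy_mesh.lower_left = (-10.71, -10.71, -1.0)
entropy_mesh.upper_right = (10.71, 10.71, 1.0)
entropy_mesh.dimension = (17, 17, 1)
settings.entropy_mesh = entropy_mesh
```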

Hi Paul,

Can you give some advice on how to determine the number of generations? If I understand correctly, the “problems with a high dominance ratio” mentioned in the documentation are those models whose dimensions are much larger than the mean free path of neutrons. So if I want to simulate a full core, I need to group multiple generations into a batch to reduce the underprediction of variance, right?
However, what is the side effect of specifying multiple generations in a batch?
Is “10 batches × 5 generations × 10,000 particles” always better than “50 batches × 10,000 particles”?
Is source convergence affected by setting multiple generations per batch?


With respect to source convergence, the only thing that matters is the total number of generations. For example, if you expect your source to converge in 100 generations and you are using 10 generations per batch, then you should use 10 inactive batches.
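For example (illustrative numbers only, in a plain settings sketch):

```python
import openmc

settings = openmc.Settings()
settings.particles = 10000
settings.generations_per_batch = 10  # 10 generations grouped into each batch
settings.inactive = 10               # 10 batches x 10 generations = 100 inactive generations
settings.batches = 60                # 50 active batches = 500 active generations
```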
