Firstly, what do they mean? I assume the more you have, the more accurate the estimates will be relative to the actual value? And what ratio should the three be in?
I've tried 100 particles, 10 inactive batches, and 50 batches, and also 1000 particles, 50 inactive, and 100 batches. The results varied, but running 1000 particles took way too long.
You may want to wait for someone with more experience, but as far as I understand, the accuracy of your results (along with the computational cost of the calculation) depends roughly on the product batches × n_particles. Of the two scenarios you suggested, the first should give you a less accurate result than the second, for a couple of reasons:
a) Each batch has fewer particles, and in my experience, for a given value of batches × n_particles, a run with more particles per batch gives a more accurate result, in less time, than one with fewer particles per batch but more batches.
b) In the first scenario you have 10 inactive batches, while in the second you have 50. I have not fully understood inactive batches, but they are useful for source convergence problems: when you solve a k-eigenvalue problem, the code first iterates until the source distribution no longer depends on the initial guess. The inactive batches cover this initial phase, and only afterwards do the tallies start to be accumulated. That said, I have done some tests and couldn't see a difference between a calculation where you specify the inactive batches and one where you don't.
Thank you so much, that makes sense to me. I have played with a reference benchmark, i.e. a simple PWR reactor, and now want to implement that in a fast reactor like the sodium fast reactor, so I'm just gathering all the data. I still need to learn about hexagonal lattices in OpenMC (and in general) before I can move forward.
Hi, that’s a very common question, even for experienced users. The number depends on the size of your problem. For good statistics, you want to converge your result to about 20 pcm (you want your final average k +/- 0.00020). For a single, full-length PWR fuel rod, I typically use 1 million particles per batch for really good results.
The inactive batches allow the source to decouple from your initial source distribution. I don’t have any concrete recommendations for you, but I typically use 200 batches, with 100 inactive batches to be safe. You can use fewer inactive batches if your initial source distribution is closer to the expected true source distribution.
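As a concrete sketch of these settings in the Python API (the particle count and mesh bounds below are placeholders, not recommendations for any specific model), you can also ask OpenMC to track Shannon entropy, which helps you judge whether your inactive batches were enough:

```python
import openmc

settings = openmc.Settings()
settings.particles = 1_000_000   # particles per batch
settings.batches = 200           # total batches
settings.inactive = 100          # discarded while the fission source converges

# Optional: a coarse mesh for Shannon entropy. If the entropy has plateaued
# before the active batches begin, the source is likely converged.
mesh = openmc.RegularMesh()
mesh.dimension = (8, 8, 8)
mesh.lower_left = (-100.0, -100.0, -100.0)   # cm, placeholder bounds
mesh.upper_right = (100.0, 100.0, 100.0)
settings.entropy_mesh = mesh

settings.export_to_xml()
```

After the run, the entropy per batch is printed in the output and stored in the statepoint file, so you can check that it stopped drifting before your active batches began.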
With any Monte Carlo code, you will need to run a large number of neutrons for good statistics, and this will take a very long time. It’s one of the downsides of Monte Carlo in general. I suggest you run OpenMC in parallel to speed up the calculation.
Hi, thank you very much for your detailed answer. It makes sense to me. Currently I have been using 1000 particles, which is nothing compared to the 1 million that you use. When I use 1000 it already takes far longer than 100. I'm not sure I will need 1 million, but I think I should try something like 10,000 for a more accurate representation.
My problem is I don't really know how to use OpenMC in parallel. I'll look into it. Thank you very much again.
If you mean using all of your PC's cores, you can easily do it with openmc.run(threads=8), and if you keep reading the documentation, you can find even more, including running on multi-node clusters.
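For example (assuming you launch the simulation from the Python API; the thread and process counts here are just illustrations to match your hardware):

```python
import openmc

# Shared-memory parallelism: use 8 OpenMP threads on one machine
openmc.run(threads=8)

# Distributed-memory parallelism: launch 4 MPI processes
# (requires an OpenMC build with MPI support)
openmc.run(mpi_args=['mpiexec', '-n', '4'])
```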
Hi @mkreher13,
I am running a fixed-source calculation for a quarter of a cylindrical fusion device a couple of meters thick; even with 10 batches of 10e9 particles, I still get neutron flux values with an accuracy of only 10% or so. I know it depends a lot on the materials used in the model. I have done some tests with far fewer particles to check the accuracy, but I hadn't tested up to such a number of particles. In your opinion, is there something I should check, or is it reasonable to get such low flux accuracy given the geometry of the model?
@tony_emme,
Large geometries are difficult to model since a very large number of particles is needed to attain good accuracy. I think there are probably no errors in your model, just need to get that accuracy up by changing the settings. Why don’t you try running in “tally trigger” mode? This allows you to run for as long as you need until you hit the desired accuracy on a certain tally. It also gives you an estimate of how many more batches are needed to hit that target.
Here is an example: MGXS Part II: Advanced Features — OpenMC Documentation
In the settings, specify:
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4 (for example)
Use openmc.Trigger() to specify the trigger parameters.
After you create the flux tally, add the trigger. For example:
trigger1 = openmc.Trigger('std_dev', 0.01)
tally1 = openmc.Tally()
tally1.scores = ['flux']
tally1.triggers = [trigger1]
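Putting those pieces together, a minimal end-to-end sketch might look like this (the std_dev threshold and batch numbers are placeholders to adapt to your problem):

```python
import openmc

settings_file = openmc.Settings()
settings_file.batches = 50
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4

# Keep running batches until the flux tally's standard deviation
# drops below the threshold (or trigger_max_batches is reached)
trigger1 = openmc.Trigger('std_dev', 0.01)
tally1 = openmc.Tally()
tally1.scores = ['flux']
tally1.triggers = [trigger1]

settings_file.export_to_xml()
openmc.Tallies([tally1]).export_to_xml()
```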
Note also: from what I understand, fixed source calculations make it difficult to get enough particles in all of the regions of your geometry. You may be under-sampling certain regions. Sometimes people “force” neutrons into certain regions to avoid this.
Just wanted to chime in with a rule of thumb for choosing the particle population. It's not always true, but it usually is:
If you want to reduce your error by a factor of ten, you need to increase the total number of particles run by a factor of 100.
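This 1/√N scaling is easy to demonstrate with a toy Monte Carlo estimate (plain NumPy, nothing OpenMC-specific): multiplying the sample count by 100 shrinks the standard error of the mean by roughly a factor of 10.

```python
import numpy as np

rng = np.random.default_rng(42)

def standard_error(n):
    """Standard error of the mean of n uniform random samples."""
    samples = rng.uniform(size=n)
    return samples.std(ddof=1) / np.sqrt(n)

err_10k = standard_error(10_000)
err_1m = standard_error(1_000_000)   # 100x more samples

# The ratio of the errors should be close to sqrt(100) = 10
print(err_10k / err_1m)
```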
This [1] is some good background reading on picking the right amount of particles for a given simulation. Unfortunately with eigenvalue mode, there are some subtleties that complicate error estimation [2].
@tony_emme For these types of problem, you really need to use some kind of variance reduction method (weight windows, etc.). There hasn’t been a lot of effort put into such features for OpenMC yet, although there is an open pull request with initial support for weight windows based on a superimposed structured mesh. Hopefully a future version of the code will enable much quicker calculations for these deep penetration problems!
Very good discussion from all on setting the number of particles/batches. We ought to expand on this in the user’s manual so that we have a clear reference to point people to.