Hello everyone,
I am Shurabil. This is my second post today. I didn’t think it would be appropriate to include this in my first one.
1) Today I wanted to understand when I should consider the source distribution of a model converged. For this, I ran a simulation with 100,000 particles per batch and 250 batches. I plotted the Shannon entropy and keff with matplotlib and noticed that the entropy was flat but keff was fluctuating quite a bit. Have I not used enough particles per batch? Have I run the simulation for too many batches? Is the large number of batches responsible for the fluctuations in keff? I have read that if I use too many batches, error propagates from previous batches to new ones, which is why I am asking.
Attachment: https://drive.google.com/drive/folders/15bwGH0vyDmI4gUhhtUlIG6Pe_l6uJ7xx?usp=sharing
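For reference, this is roughly how I made the plots (a minimal sketch; I’m assuming the statepoint file from the run is named statepoint.250.h5):

```python
import matplotlib.pyplot as plt
import openmc

# Read the Shannon entropy and per-batch k-eff from the statepoint file
with openmc.StatePoint('statepoint.250.h5') as sp:
    entropy = sp.entropy        # Shannon entropy for each batch
    k_gen = sp.k_generation     # k-eff estimate for each batch/generation

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(entropy)
ax1.set_ylabel('Shannon entropy')
ax2.plot(k_gen)
ax2.set_ylabel('k-eff per batch')
ax2.set_xlabel('Batch')
plt.show()
```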
2) I want to know something else as well. I have read that a large number of particles per batch is required to converge the source to its steady-state distribution, and that in large reactors the simulation has to be run for many batches so that the higher-order modes die out of the source distribution. How do the people who run these simulations optimize the number of batches and the number of particles per batch?
3) I also want to know whether it is possible to run a simulation with a varying number of particles per batch, i.e., start with a low number of particles per batch and then increase it little by little after a few batches?
Thanks in advance
Responding to each point:
- The random variation in keff from batch to batch is expected. The important thing is that once you reach the active batches, assuming your source is converged, the average of keff over the active batches should converge, i.e., the uncertainty on the mean will decrease as the simulation progresses (see the short sketch after this list for one way to check this from your statepoint file).
- The convergence of the source distribution is really affected by the number of batches, not so much the number of particles per batch. Once you see that the Shannon entropy is not changing much from batch to batch (i.e., there is no clear trend in either direction), that is a good indication that the spatial distribution of the source has converged. If you run more particles per batch, you essentially have better spatial fidelity in your source, and as a result it may actually take a bit longer to converge the distribution than if you had used fewer particles. Another way of thinking about this is that with more particles per batch, it takes longer to get to a point where the stochastic noise in the source is the dominant source of error (as opposed to a systematic error from the initial source guess).
- This is not currently possible in the main version of OpenMC, although @amandalund did do a study a while back looking into this as a possible way of accelerating convergence of the source.
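To make the first point concrete, here is a minimal sketch of how you could look at the mean keff over the active batches from your statepoint file (the filename is assumed, and I’m assuming one generation per batch, which is the default):

```python
import numpy as np
import openmc

with openmc.StatePoint('statepoint.250.h5') as sp:
    # Keep only the active batches (assumes 1 generation per batch, the default)
    k_active = sp.k_generation[sp.n_inactive:]

mean_k = k_active.mean()
# Rough standard error of the mean; it shrinks like 1/sqrt(number of active batches)
std_err = k_active.std(ddof=1) / np.sqrt(len(k_active))
print(f'k-eff = {mean_k:.5f} +/- {std_err:.5f}')
```

Note that OpenMC already reports a combined keff estimate with its uncertainty at the end of the run (exposed as `sp.keff` in recent versions); the sketch above is just to show that the batch-to-batch scatter you see in your plot is separate from the uncertainty on the mean.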
Thank you, Paul, for your reply. I still have some doubts regarding source convergence. When I look at the plot of Shannon entropy vs. number of batches for the file I attached, it looks to me like it didn’t even take 5 batches for the source to converge. I would like to know your opinion on this: how many batches did it take for the source to converge in my case?
That line does look really flat! You may want to increase the number of mesh elements in each dimension. If the mesh is too coarse, it’s not really giving you much information about how the source distribution is changing. In the example model you shared, you have a mesh with dimension (6, 6, 4), or 144 total mesh elements. With 100,000 particles per batch, this means that each mesh element will contain about 700 particles on average. Generally, you want this number to be a lot lower (~20). Also, because your model is effectively 2D (no change in geometry in the z-direction and fully reflective), I would recommend putting only one mesh element in the z direction and the remainder in the x and y directions. So, if you want 20 particles per mesh element on average and you’re running 100,000 particles per batch, you could try using a mesh dimension of (70, 70, 1). Or, because you have a 17x17 assembly, a mesh dimension of (68, 68, 1) might make more sense.
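A minimal sketch of what that entropy mesh could look like (the lower_left/upper_right extents here are placeholders; use the actual bounding box of your assembly):

```python
import openmc

# Entropy mesh with roughly one element per fuel pin in x-y and a single element in z
entropy_mesh = openmc.RegularMesh()
entropy_mesh.dimension = (68, 68, 1)
entropy_mesh.lower_left = (-10.71, -10.71, -10.0)   # placeholder extents (cm)
entropy_mesh.upper_right = (10.71, 10.71, 10.0)

settings = openmc.Settings()
settings.entropy_mesh = entropy_mesh
```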
Now, that being said, your model is a single 2D assembly. I generally would expect a model like this to converge very fast, since neutrons will redistribute themselves over the assembly within a few batches. If you were running a full core model where it takes longer for the neutrons to propagate through the core, it would be a different story, but in this case it’s likely safe to use some nominal number of inactive batches (20?) and call it a day.
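For concreteness, the run settings could then look something like this (the batch and particle counts are just example values, not a recommendation for every model):

```python
import openmc

settings = openmc.Settings()
settings.particles = 100_000
settings.inactive = 20      # nominal number of inactive batches for a small 2D assembly
settings.batches = 220      # leaves 200 active batches for tallies and k-eff statistics
```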
Thank you very much for your help