Fixed source supercritical problem

Dear all,

I am developing a verification method for subcritical experiments. The goal is to verify that a subcritical experiment remained subcritical and never went supercritical.

To test the feasibility of this method, I am modeling a small sphere of plutonium in a fixed-source problem with OpenMC. The fixed source represents the neutrons that irradiate the compressed sphere in a very short pulse. I tally the current of outgoing fission neutrons and photons through a surface, as well as the fission rate at different time points (using the time filter). When the mass is subcritical, we see a clear decrease in these three variables over time (Figure_1). However, when the mass is slightly supercritical, the variables do not increase as one would expect; instead they stagnate and then decrease over time (Figure_3). A sketch of the tally setup follows the figures below.

Figure_1
Figure_3
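
For concreteness, the tallies are set up roughly like this (a simplified sketch using the OpenMC Python API; the radius and names below are placeholders, not the actual model):

```python
import numpy as np
import openmc

# Time bins from 0 to 1 microsecond in 100 ns steps (values in seconds)
time_filter = openmc.TimeFilter(np.linspace(0.0, 1e-6, 11))

# Current of neutrons and photons through the outer surface
outer_surface = openmc.Sphere(r=10.0)  # placeholder radius
current_tally = openmc.Tally(name='outgoing current')
current_tally.filters = [
    openmc.SurfaceFilter(outer_surface),
    openmc.ParticleFilter(['neutron', 'photon']),
    time_filter,
]
current_tally.scores = ['current']

# Fission rate in the same time bins
fission_tally = openmc.Tally(name='fission rate')
fission_tally.filters = [time_filter]
fission_tally.scores = ['fission']

tallies = openmc.Tallies([current_tally, fission_tally])
```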

I wonder whether OpenMC is actually able to model fixed-source problems for supercritical systems, or whether this abnormal evolution of the variables in my supercritical experiment is due to some other error in my model.

Thanks a lot for your help! I am happy to provide more details if that can help in dealing with this issue.

Julien

What are the keff values for the configurations?

Hello!

Thank you for your question.

When running a criticality search for the same two configurations as discussed in my original post, I get keff = 0.578 for the subcritical configuration (first figure) and keff = 1.302 for the supercritical configuration (second figure).

Here are some additional observations that do not make sense to me. As we see in the second figure for the supercritical configuration, the neutron and photon numbers end up stagnating during the first 1000 nanoseconds (the simulation has settings.cutoff set to 1000 nanoseconds; a sketch of this setting follows below). However, if I increase this cutoff to 1768 nanoseconds, I get the error "Secondary particle bank appears to be growing without bound. You are likely running a subcritical multiplication problem with k-effective close to or greater than one." This means that the code does have the photon and neutron populations increasing for this configuration. But if that is the case, we should at least see the variables starting to increase during the first 1000 nanoseconds. Even when setting the cutoff as close as possible to 1768 ns without triggering the error (e.g. 1767 ns), the variables still stagnate. I doubt that the fission rates suddenly increase only at 1768 ns and not before.
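
For reference, the time cutoff is set along these lines (a sketch, assuming the standard time-cutoff keys of settings.cutoff; values are in seconds):

```python
import openmc

settings = openmc.Settings()
settings.run_mode = 'fixed source'

# Kill particles once they pass 1000 ns (values in seconds)
settings.cutoff = {
    'time_neutron': 1000e-9,
    'time_photon': 1000e-9,
}
```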

Thanks a lot! Any help or tips are welcome.

Julien

Julien, sorry, I can't quite picture the geometry of your model, but are you keeping the heavy-metal mass at 3000 while changing the HM density from 15 to 60? I don't know the units of the parameters shown on the plot, but I think that by using a higher density you reduce the volume (smaller geometry) needed to reach the same mass. This smaller geometry will have some drawback in terms of neutron leakage, since you are not using a reflective boundary condition and the fission neutrons are so fast that they may end up outside the HM mass right after being born. Is that applicable to your scenario?
Also, I didn't know that OpenMC could be used for this kind of time-dependent calculation. I hope you can give us a glimpse of the inputs and how it is done, since this kind of simulation would be good for illustrating physical phenomena in class, e.g. the prompt jump, or the change in neutron population during a step/ramp reactivity insertion related to an RIA.
Thanks, Julien

Hello all,

@wahidluthfi thank you for your reply and interest in the problem.

I have continued working on this issue, and it seems I have identified a bug or an unexpected behavior of OpenMC.

The bottom line of my findings so far is that OpenMC fails to properly model supercritical systems in coupled neutron-photon simulations, while it has no problem doing so in neutron-only simulations.

Here are some details on why I came to this conclusion.

I ran a fixed-source simulation with a ball of plutonium-239 and looked at tallies such as the prompt fission neutron production and the currents of neutrons and photons just outside the ball. The source is a "Dirac" in time that only emits neutrons at t = 0. I also set a time cutoff at 1e-7 seconds and use a time filter for the tallies from 0 to 1e-7 seconds in steps of 1e-8 seconds (a sketch of these settings follows below). In addition, I changed the source code so that it would not run into a secondary bank overflow for supercritical configurations: on line 116 of physics.cpp, `if (p.secondary_bank().size() >= 10000)`, I replaced 10000 with 1000000. The input file is attached at the end of this post.
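
The relevant part of the input looks roughly like this (a sketch, not the exact run.py; in recent OpenMC versions the source class is openmc.IndependentSource, while older versions use openmc.Source):

```python
import numpy as np
import openmc

# Point source of neutrons, all emitted at t = 0 ("Dirac" in time)
source = openmc.IndependentSource()
source.space = openmc.stats.Point((0.0, 0.0, 0.0))
source.time = openmc.stats.Discrete([0.0], [1.0])
source.particle = 'neutron'

settings = openmc.Settings()
settings.run_mode = 'fixed source'
settings.source = source
settings.photon_transport = True
settings.cutoff = {'time_neutron': 1e-7, 'time_photon': 1e-7}

# Time filter from 0 to 1e-7 s in steps of 1e-8 s, shared by all tallies
time_filter = openmc.TimeFilter(np.linspace(0.0, 1e-7, 11))
```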

When I set the radius of the plutonium-239 ball to 4.5 cm (r_pu = 4.5), the configuration is subcritical (a criticality search would yield keff ~ 0.9). The fixed-source simulation for this configuration yields the expected results: the neutron production and the currents of neutrons and photons just outside the sphere drop fairly fast (1st figure). Note that I can also deactivate the photons (settings.photon_transport = False) and get the same results, with, of course, the photon current at zero since photons are no longer produced (2nd figure).

figure_1
figure_2

Now, problems arise with a supercritical configuration. Let's take r = 7 cm, which makes the ball supercritical with keff ~ 1.3. If I run the fixed-source simulation with photons deactivated, there is no problem; the tallies behave as expected, and the neutron production and current grow exponentially (3rd figure). But if I activate photons (settings.photon_transport = True), the tallies do not make sense at all! Somehow, they stagnate and then decrease just before the time cutoff (4th figure). The only difference between the two runs is the flag shown in the sketch after the figures.

figure_3
figure_4
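
To be explicit, the only change between the two runs above is this flag (sketch; the rest of the model is identical):

```python
import openmc

settings = openmc.Settings()
settings.run_mode = 'fixed source'

# Neutron-only transport: tallies grow exponentially, as expected
settings.photon_transport = False

# Coupled neutron-photon transport: tallies stagnate and then decrease
# settings.photon_transport = True
```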

I have also tried running the same supercritical configuration with photon_transport = True but removing most photons (by setting an energy cutoff for photons, as sketched below). The interesting thing is that the tallies seem to shift gradually toward the correct behavior as I remove more and more photons. See the last two figures.

figure_5
figure_6
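
The photon removal is done along these lines (a sketch, assuming the standard 'energy_photon' cutoff key; the threshold value here is only illustrative):

```python
import openmc

settings = openmc.Settings()
settings.run_mode = 'fixed source'
settings.photon_transport = True

# Photons born below the energy threshold are discarded (values in eV);
# raising the threshold removes more and more photons from the simulation.
settings.cutoff = {
    'time_neutron': 1e-7,
    'time_photon': 1e-7,
    'energy_photon': 1e6,  # illustrative: kill photons below 1 MeV
}
```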

The last point I want to raise is that even when the tallies stagnate while they should grow exponentially, the code would still run into the secondary bank overflow if I had not changed line 116 as described above. This seems to mean that the code does produce a growing number of particles in coupled neutron-photon mode, but that the tallies somehow report the wrong results.

I am curious to know if someone has an idea of what could explain these apparently inconsistent results.

Thanks a lot!

Julien

Input file:
run.py (4.0 KB)


Thank you, Julien, that's good work localizing the problem, and sorry I didn't have any suggestions in this case, since I also thought that the photon and neutron populations would increase exponentially in a supercritical medium bombarded by neutrons.
Also, a long time ago (2016, Question on openmc fixed source calculation - #2 by paulromano), Paul said that fixed-source calculations are treated as subcritical multiplication problems, which, as you also mentioned, will overflow the secondary particle bank for supercritical problems.
Thanks for your great explanation. I hope the developers can respond to this discussion and give some clarity.
Also, great work on ONIX; I am modifying some lines of your code locally to make it compatible with the newer version of OpenMC.
Thanks Julien,
Wahid

I also noticed that with the supercritical configuration of your input (R = 7.0 cm), the coupled neutron-photon calculation finished faster than the neutron-only one (settings.photon_transport = False).
With 1000 particles and 10 batches, the run with photon transport needed only around a minute, but the run without photon transport needed around 1.5 hours. Some calculations might be suppressed when photon transport is activated, or when tallying photons, as you said before.

@Julien I just submitted a bug fix for this problem:
