Photon flux with NaN value

Hello all,

I am having an issue in a shielding simulation. My model consists of different cylinders, each filled with a different material (see figure). At the center, I have positioned two point sources: a neutron source and a photon source. Since I have both energy spectra, I am using openmc.stats.Tabular to define them (following the recommendations in this post). I wish to tally the fluxes, so I created a mesh (101 x 101 elements) and particle filters.

[Figure: model geometry]
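
A simplified sketch of the tally setup (the mesh bounds here are placeholders; my actual model uses its own extents):

import openmc

# Flux tally on a 101 x 101 mesh with a photon particle filter
# (lower_left/upper_right values are placeholders)
mesh = openmc.RegularMesh()
mesh.dimension = [101, 101]
mesh.lower_left = [0, -100]
mesh.upper_right = [100, 100]

tally_photons = openmc.Tally(name='Half Domain Photons')
tally_photons.filters = [openmc.MeshFilter(mesh), openmc.ParticleFilter(['photon'])]
tally_photons.scores = ['flux']
tallies = openmc.Tallies([tally_photons])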

The mesh has 10 201 elements, so the number of particles (200 000) divided by the total number of mesh elements is around 20 (following the recommendations in this post). The simulation ran fine. I tried to load the results using:

sp = openmc.StatePoint('statepoint.300.h5')
tally_flux_photons = sp.get_tally(name='Half Domain Photons').get_pandas_dataframe()

but the 'mean' column does not appear in the 'tally_flux_photons' DataFrame. Then, looking at the 'tallies.out' file, I realized that I am getting a photon flux equal to 'NaN'. However, when running the same simulation with 25 000 particles I do not have this issue. The number of batches is 300 in both cases.

Here are the two ‘tallies.out’ files.

Could someone please tell me what is causing this issue?

Thank you,
Maria

Hi @MaryAAZ. This sounds like it could potentially be a bug in the code if you are getting NaNs. Are you able to share the model, or a simplified version that shows the same behavior? Unfortunately, with just the tallies.out files there's not much I can do to diagnose the problem.

Hello @paulromano. Thanks for your reply. Unfortunately, I cannot share the files publicly. I sent you a message with the XML files. Please let me know if you need more information about my model.

Thank you,
Maria

Thanks for sharing the model, Maria. I was able to track down the issue and have a fix submitted for review. Unfortunately, I don't have a good workaround for your model; the best I can suggest is to install the development version of the code once the above fix has been merged.

Thank you @paulromano

Hello all,

Thanks @paulromano. I was able to run my simulations on the development branch and did not get 'NaN' values. My simulations use 500 000 particles and 300 batches.

I have a few questions:

  1. The number of particles divided by the total number of mesh elements (101 x 101) is around 49. Should I adjust the number of particles or the number of mesh elements to obtain a ratio of ~20, or can I keep the settings previously mentioned?

  2. I would like to know how the software treats a problem where two point sources (a neutron source and a gamma source) are defined and photon transport is activated.

  3. I created a 2D mesh (the model is 2D) to define a mesh filter. For example:

    mesh = openmc.RegularMesh()
    mesh.dimension = [50, 50]
    mesh.lower_left = [0, -100]
    mesh.upper_right = [100, 100]
    mesh_filter = openmc.MeshFilter(mesh)

    Then, loading the statepoint and getting the tally by using:

    sp = openmc.StatePoint('statepoint.300.h5')
    tally = sp.get_tally(name="Tally's Name").get_pandas_dataframe()

    Looking at the DataFrame, I see that there is one element in the z-direction. What is the size of that element, given that the mesh was created in 2D?

Thanks in advance,

Maria

Hi Maria. To address your questions:

  1. That recommendation of 20 particles per mesh element mostly has to do with the source distribution for a k-eigenvalue calculation, as opposed to tally results on a mesh. In your case, the number of particles that you should run is really dictated by the level of uncertainty that is acceptable to you for these tallies. If the uncertainty is too high, increase the number of particles. Generally speaking, decreasing the standard deviation by a factor of 2 will require a factor of 4 more particles (the variance decreases as 1/N); see the sketch after this list for a quick way to check the relative uncertainty.
  2. Tally results are always normalized “per source particle”, so if your source distribution contains two separate sources, you'll have to account for the relative source strengths of the two sources. Assuming the strengths of the neutron and photon sources were the same, this means that if you wanted tally results per source neutron or per source photon, you'd need to multiply the results by two.
  3. The one element in the z-direction extends infinitely in both the -z and +z directions.
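
As a quick check on point 1, here is a minimal sketch (assuming the usual 'mean' and 'std. dev.' columns in the tally DataFrame) that looks at the relative uncertainty of the mesh tally:

import numpy as np
import openmc

sp = openmc.StatePoint('statepoint.300.h5')
df = sp.get_tally(name='Half Domain Photons').get_pandas_dataframe()

# Relative error per mesh element (elements that scored nothing are left as NaN)
mean = df['mean'].values
std = df['std. dev.'].values
rel_err = np.full_like(mean, np.nan)
nonzero = mean > 0
rel_err[nonzero] = std[nonzero] / mean[nonzero]
print('Maximum relative error:', np.nanmax(rel_err))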

Hi Paul. I am a little bit confused with your second answer.

My source distribution does contain two separate sources (neutron and photon sources). Each source has the same strength (1, strength value by default).

Defining the sources:
neutron_source = openmc.Source()
neutron_source.space = openmc.stats.Point(xyz=(0.0, 0.0, 0.0))
neutron_source.energy = openmc.stats.Tabular(NE, NP, interpolation='histogram')

photon_source = openmc.Source()
photon_source.space = openmc.stats.Point(xyz=(0.0, 0.0, 0.0))
photon_source.energy = openmc.stats.Tabular(GE, GP, interpolation='histogram')
photon_source.particle = 'photon'

settings.source = [neutron_source, photon_source]
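
For completeness, the rest of the settings look roughly like this (continuing from the snippet above; photon transport is activated as mentioned before, and the particle and batch counts are the ones from my runs):

settings.run_mode = 'fixed source'   # external neutron/photon sources
settings.photon_transport = True     # photon transport activated
settings.particles = 500000
settings.batches = 300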

I am tallying the neutron and photon fluxes using particle and mesh filters, and loading the results as follows (ne is the number of mesh elements, which is the same in the x and y directions):

sp = openmc.StatePoint('statepoint.300.h5')
flux_neutrons = sp.get_tally(name='Neutrons').get_pandas_dataframe()
flux_neutrons = flux_neutrons['mean'].values.reshape((ne, ne))

flux_photons = sp.get_tally(name='Photons').get_pandas_dataframe()
flux_photons = flux_photons['mean'].values.reshape((ne, ne))

If I understood correctly, these results are normalized “per source particle”. Then, multiplying the neutron and photon flux values by 2, I will get the results normalized “per source neutron” and “per source photon”, respectively.

neutron_flux = 2 * flux_neutrons

photon_flux = 2 * flux_photons

After that, I can convert the flux to conventional units using the equation described in a previous post, considering the neutron and photon source intensities together with the respective flux values.

Is this the correct procedure?

Thanks,
Maria

Please @paulromano, do you have any comments on this?

Thank you,
Maria

@MaryAAZ Sorry for the delayed response, just catching up on all the messages here. Yes, what you described looks correct to me. One sanity check you can do is to set the source for your problem to be only neutrons or only photons and confirm that the results that you get are consistent with the case where you have both sources and results multiplied by two.
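
For example, a minimal sketch of that comparison (the statepoint filenames are placeholders, and 'Neutrons' is the tally name you used earlier in the thread):

import openmc

# Compare the neutron flux from a neutrons-only run with the same tally from
# the two-source run (the second filename is just a placeholder)
sp_single = openmc.StatePoint('statepoint.300.h5')
sp_both = openmc.StatePoint('statepoint_both_sources.300.h5')

flux_single = sp_single.get_tally(name='Neutrons').get_pandas_dataframe()['mean'].values
flux_both = sp_both.get_tally(name='Neutrons').get_pandas_dataframe()['mean'].values

# Element-wise ratio over the mesh (small constant avoids dividing by zero)
ratio = flux_single / (flux_both + 1e-30)
print('Mean ratio:', ratio.mean())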

Hi @paulromano. Indeed, I had already done the test you mentioned. I ran one case with only the neutron source and a second case with both sources (500 000 particles and 300 batches in both cases). Comparing the results, I see that they are almost the same. I was expecting the neutron flux obtained with two sources to be half of the value obtained with just one source. Am I doing something wrong here, or have I missed something?

I sent you an Excel file with the results. I would appreciate it if you could take a look.

Thanks,
Maria

Hi @paulromano. I forgot to mention that the results were obtained using a mesh filter (101 x 101 elements).

Maria

Sorry for misleading you here! The multiplication by two would actually only be needed if the total source strength were 1.0. However, when you add two separate sources (one neutron and one photon), each of them has a source strength of 1.0 by default, so the total source strength is 2.0. Said another way, introducing a second source doesn't change the strength of the first source, so the neutron flux you get is the same even if you add a separate photon source. So, in your case, the tally values are already normalized correctly according to what you want.
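
In code terms, continuing from your source definitions:

# Each source keeps its default strength of 1.0, so the combined source
# strength is 2.0 and the tallies are already normalized the way you want
total_strength = neutron_source.strength + photon_source.strength   # 1.0 + 1.0 = 2.0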

Hi @paulromano. Thank you very much for your reply.