Tally Units, Material and Cell Meshes, Lethargy

I’ve made several posts on here asking for clarification on results and I appreciate the responses from the users. I have some more questions regarding outputs and data manipulation.

First off, what are the reported units for the “Fission Rate” tally? Am I correct in assuming that it would be in fissions per source particle, so that only the normalization factor “f” from the methodology would be required? Or is the fission rate also divided by volume? Is there a comprehensive list of the reported units for all tally scores somewhere I can review?

In this analysis, should we process the sum of the tally or process it mesh element by mesh element? When we applied the heating tally score over the whole system by mesh, we ran into divide-by-zero errors.

Lastly, how do you define the energy binning structure? Say I wanted to specify a particular material or cell in the OpenMC model, with equal-lethargy energy bins ranging from 1e-11 MeV to 20 MeV, and find the flux (or any other tally) for that material or cell with energy as the only bin variable. I can provide the Serpent equivalents of what I mean; since we’re using OpenMC as a benchmark against other Monte Carlo reactor physics codes, I want us to be as consistent as possible.
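
As a side note on the equal-lethargy part of the question: because lethargy u = ln(E_ref/E) changes by a constant amount whenever energy changes by a constant ratio, equal-lethargy bins are simply logarithmically spaced energy edges. A minimal sketch (the bin count is an assumption; OpenMC expects energies in eV, so 1e-11 MeV becomes 1e-5 eV):

```python
import numpy as np

# Equal-lethargy bins = logarithmically spaced energy edges:
# a constant lethargy width corresponds to a constant edge-to-edge ratio.
n_bins = 500                 # assumed number of bins (illustrative)
e_min, e_max = 1e-5, 20e6    # eV; 1e-11 MeV to 20 MeV in OpenMC's eV units

edges = np.logspace(np.log10(e_min), np.log10(e_max), n_bins + 1)

# Sanity check: successive edge ratios are constant (equal lethargy width).
ratios = edges[1:] / edges[:-1]
assert np.allclose(ratios, ratios[0])
```

These edges could then be passed to an `openmc.EnergyFilter`, combined with an `openmc.MaterialFilter` or `openmc.CellFilter` on a flux tally, to get a spectrum binned only in energy for that material or cell.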

Please let me know if I need to clarify anything, I appreciate any input.

Regards

Hi,

I’m not sure about all the units but some of them are here.

https://docs.openmc.org/en/latest/usersguide/basics.html#physical-units

Also some are here

https://docs.openmc.org/en/stable/usersguide/tallies.html#id2

The second link also has a nice discussion on normalisation of tallies.

Additionally, I have a unit converter package for OpenMC that might be of interest. It is nowhere near comprehensive, but perhaps slightly better than nothing.

In general, I don’t believe tallies in OpenMC are divided by the volume automatically. However, there are volume methods on some meshes, and stochastic volume calculations if you want to obtain a cell volume. The tally unit converter has the option to divide by volume if the required units are the base units per unit volume.

For the spectrum tally you can make use of any list of energy bins, but you might want to consider these standard energy group structures:

openmc/openmc/mgxs at develop · openmc-dev/openmc · GitHub (see `__init__.py`)

For an example of how to use these take a look at this minimal example

All the best

Jon

Thanks for the input. After reviewing all these units and applying the process to our own tallies, we’re still off from our benchmark comparisons by six orders of magnitude. When we multiply our results by the number of source particles in addition to the normalization factor, we get plots that look nearly identical. Can we just multiply, say, a fission rate tally by the number of source particles used in the simulation and call it a day? If not, why not, and how should that process differ from the flux process that’s discussed?

@rherner What is your point of comparison? I can’t think of a situation where it would make sense to multiply by the number of source particles. All tallies in OpenMC are reported “per source particle” which is what you want before applying a [source/sec] normalization factor.
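
To make the “per source particle” point concrete, here is a hedged sketch of the power-based normalization described in the docs (all numbers and names are illustrative, not from the original thread):

```python
# Sketch: converting tallies reported "per source particle" to absolute
# units via a power normalization. All numeric values are illustrative.
power_w = 1.0e6                 # assumed reactor power [W]
heating_ev = 1.8e8              # illustrative heating tally [eV / source particle]
ev_to_joule = 1.602176634e-19   # [J/eV]

# Source rate f [source particles / s] chosen so that the tallied heating
# reproduces the specified power:
f = power_w / (heating_ev * ev_to_joule)

# Any other "per source particle" tally is then scaled by the same f, e.g.
fission_per_source = 0.4                 # illustrative fission-rate tally
fission_rate = f * fission_per_source    # [fissions / s]

# Note: the number of simulated histories never appears here; tallies are
# already averages per source particle, independent of how many were run.
```

The same f (divided additionally by volume, where appropriate) is what converts a flux tally to n/cm²·s.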

The problem we are running into is that when we apply the normalization factor using the process defined in the documentation, we get about six orders of magnitude difference between the Serpent and OpenMC results, even though the geometries are the same. The maximum flux in our Serpent color plots is about 1e16 n/cm^2-s, while the corresponding OpenMC values only reach about 1e10. We ran 1.5 million particles for these simulations, and when we multiply the normalization factor by the number of particles, the corresponding flux maps become nearly identical. That’s why I believe there’s a potential issue in how the tally normalization is being done. It could entirely be an issue on our end, with the process defined in the documentation not being translated into our code correctly. If that’s something y’all would be able to help us troubleshoot as well, I’d appreciate it; we just don’t know why there is such a massive discrepancy.

If you’re able to share some code to demonstrate, that might be helpful (particularly in how you’re calculating the normalization factor). A few other thoughts come to mind which may or may not be helpful:

  • Tally results should converge to a single value as you increase the number of particles. So, if you were to run 10 million particles, your tally results should be about the same (obviously with lower standard deviation)
  • OpenMC uses units of eV where energies are concerned (Serpent uses MeV). If you’re off by exactly 1e6, that might be something to consider.
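
A quick arithmetic sketch of how that unit mix-up produces exactly the observed factor (values are illustrative; the point is only the ratio):

```python
# If a heating tally reported in eV is treated as MeV when computing the
# normalization factor f = P / (H * e), the denominator is too large by
# 1e6, so f (and every tally scaled by it) comes out exactly 1e6 too small.
power_w = 1.0e6                 # assumed power [W]
heating_ev = 1.8e8              # illustrative heating tally [eV / source particle]
ev_to_j = 1.602176634e-19       # [J/eV]
mev_to_j = 1.602176634e-13      # [J/MeV]

f_correct = power_w / (heating_ev * ev_to_j)    # heating correctly read as eV
f_wrong = power_w / (heating_ev * mev_to_j)     # same number misread as MeV

assert round(f_correct / f_wrong) == 10**6
```

This matches the direction of the discrepancy reported above: a too-small f makes the OpenMC flux come out orders of magnitude below the Serpent values.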