Discrepancy in Eigenvalue When Using Multiple Fissile Universes in SFR Core Lattice

Hello everyone,

I’m currently facing an issue in my OpenMC simulation and would appreciate your valuable insights.

I’m working on using machine learning to optimize the fuel loading in a sodium-cooled fast reactor. As part of this, I needed to define 150 unique fissile assembly universes, each with its own distinct fuel material.
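For reference, the materials are built along these lines (a simplified sketch, not my actual script; the composition shown is just a placeholder):

```python
import openmc

# Template fuel material, cloned so every assembly gets an independent copy
fuel_template = openmc.Material(name='fuel')
fuel_template.add_element('U', 1.0, enrichment=15.0)
fuel_template.add_element('O', 2.0)
fuel_template.set_density('g/cm3', 10.5)

fuel_materials = []
for i in range(150):
    fuel = fuel_template.clone()   # clone() assigns a new unique material ID
    fuel.name = f'fuel_{i}'
    fuel_materials.append(fuel)
# Each of the 150 fissile assembly universes is then built around one of these.
```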

Here’s what I’m observing:

  • When I configure the core lattice using just one of the already-defined fissile assembly universes, I get a combined k-effective of 1.12911.
  • However, when I populate the lattice using all 150 unique fissile assemblies, the resulting k-effective drops to 1.07709.

Other reactor parameters remain unchanged between these two setups. The only difference is whether the lattice uses a single repeated fissile universe or 150 distinct ones.

This results in a ~4.6% difference in eigenvalue (roughly 5200 pcm), which seems significant. I’m wondering:

  • Is this level of deviation expected when using many different fissile material definitions, even if they are structurally identical?
  • Could this be due to how OpenMC handles memory/material tracking or cross-section caching?
  • Or is it more likely that there’s an error in how I’m defining or assigning the fuel materials to each universe?

I’ve attached the relevant XML input files for both versions of the model.
geometry(One Fissile Universe).xml (27.1 KB)
materials.xml (115.5 KB)
settings.xml (1.1 KB)
geometry(150 Fissile Universe).xml (708.5 KB)

Thanks in advance for your help!

Hi Niloy,
Can you provide your entire script, or at least the part where you assign the materials to the cells? It is very difficult to diagnose from the geometry XML alone. As long as everything else stays the same, there should be no difference in k-eff unless you ran a depletion beforehand; I have done exactly what you are doing and got the same k-eff in both cases.

I quickly checked your XMLs with Ctrl+F, searching for differing values, and they look identical at least up to the assembly sections. I cannot tell whether the assemblies themselves were constructed properly, since the regions will not align between assemblies; if there is a problem, I would expect it to be there.

Lastly, if you are running from XMLs, make sure there is not a model being created by some function in the same folder. I made that mistake once without realizing it, when a boron criticality search function was creating a model. OpenMC will prioritize models over XMLs where present.
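To illustrate that last point, a script like this anywhere in your workflow will export its own model file when it runs, and OpenMC will read that in preference to your hand-edited XMLs (a hypothetical minimal example):

```python
import openmc

# Minimal bare-sphere model, purely illustrative
fuel = openmc.Material()
fuel.add_nuclide('U235', 1.0)
fuel.set_density('g/cm3', 18.0)

sphere = openmc.Sphere(r=10.0, boundary_type='vacuum')
geometry = openmc.Geometry([openmc.Cell(fill=fuel, region=-sphere)])

settings = openmc.Settings()
settings.batches, settings.inactive, settings.particles = 20, 5, 1000

model = openmc.Model(geometry=geometry, settings=settings)
model.run()  # exports the model to XML (model.xml in recent versions) before
             # running; the solver then uses that instead of the hand-edited
             # geometry/materials/settings XMLs in the working directory
```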

Hi Niloy,
I ran your model in geometry debug mode (openmc -g) and found that some of your cells are overlapping. I think you should check that first.
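If it is easier from Python, the same check can be triggered through the API (this assumes your XMLs are in the working directory):

```python
import openmc

# Equivalent to `openmc -g`: runs the model with overlap checking enabled,
# reporting any cells found to overlap at sampled particle locations.
openmc.run(geometry_debug=True)
```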


For example, one of the warnings reports that your cell 1364, named “main_in_assembly_105”, overlaps with cell 1366, named “outer_sodium_105”. I think this happened because you were not using ~ (the region complement) properly in your region definitions.
Instead of defining your geometry as “inside hexa2, within axial region 2, but outside region1” (where region1 is “inside hexa1 and within axial region 1”), you can equivalently define it as “(inside hexa2, outside hexa1, and within axial region 1) or (|) (inside hexa2, outside axial region 1, but inside axial region 2)”.
Both methods should run fine, but the second one lets you control explicitly which regions take priority in the definition.

The same idea applies if you add another hexa region larger than hexa1 and hexa2, say hexa3. Since hexa1 is the smallest, you can define the outermost geometry as “inside hexa3, outside hexa2, and within some axial region”. There is no need to mention hexa1, e.g. as “inside hexa3 but outside region2”, because region2 already contains hexa1, and that can cause conflicts in the cell definition.
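To make the two formulations concrete, here is a minimal sketch in the Python API (all surface dimensions are made up; HexagonalPrism is the composite surface available in recent OpenMC versions):

```python
import openmc
from openmc.model import HexagonalPrism

# Made-up stand-ins for the surfaces discussed above
hexa1 = HexagonalPrism(edge_length=5.0, orientation='y')
hexa2 = HexagonalPrism(edge_length=6.0, orientation='y')
z1_bot, z1_top = openmc.ZPlane(-50.0), openmc.ZPlane(50.0)   # axial region 1
z2_bot, z2_top = openmc.ZPlane(-80.0), openmc.ZPlane(80.0)   # axial region 2

axial1 = +z1_bot & -z1_top
axial2 = +z2_bot & -z2_top
region1 = -hexa1 & axial1         # inside hexa1, within axial region 1

# Method 1: carve region1 out of the larger volume with the complement ~
outer_v1 = -hexa2 & axial2 & ~region1

# Method 2: the same volume written explicitly as a union of two pieces
outer_v2 = (-hexa2 & +hexa1 & axial1) | (-hexa2 & axial2 & ~axial1)
```

Both expressions describe the same volume, but the second makes each piece explicit, which is easier to audit when you are hunting for overlaps.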

I think you should also check your “assembly_sleave_105” later.

Hope this helps you localize your geometry problem.

Thanks, mate.
I did have some overlapping cells.
That fixed the problem.
