Flip_normal() and physics for periodic problems

When working with a rotational periodic boundary condition, OpenMC will warn the user if the periodic angle does not evenly divide 360 degrees. The relevant check in the source looks like this:

  angle_ = compute_periodic_rotation(norm1[axis_2_idx_], norm1[axis_1_idx_],
    norm2[axis_2_idx_], norm2[axis_1_idx_]);

  // Warn the user if the angle does not evenly divide a circle
  double rem = std::abs(std::remainder((2 * PI / angle_), 1.0));
  if (rem > FP_REL_PRECISION && rem < 1 - FP_REL_PRECISION) {
    warning(fmt::format(
      "Rotational periodic BC specified with a rotation "
      "angle of {} degrees which does not evenly divide 360 degrees.",
      angle_ * 180 / PI));
  }
}
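
In Python terms, the check boils down to something like this (a sketch, not OpenMC code; the tolerance here stands in for FP_REL_PRECISION):

  import math

  # Warn when 360 / angle is not (nearly) an integer
  def divides_circle(angle_deg, tol=1e-7):
      rem = abs(math.remainder(360.0 / angle_deg, 1.0))
      return rem <= tol

  print(divides_circle(60.0))                # True:  360 / 60 = 6
  print(divides_circle(299.99999999999994))  # False: 360 / 300 = 1.2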

I’m working on a 1/6th core model, and I got this warning:

 WARNING: Rotational periodic BC specified with a rotation angle of
          299.99999999999994 degrees which does not evenly divide 360 degrees.

The compute_periodic_rotation function thinks I hit 300 degrees, which is 360 − 60, so it must be computing the angle that complements my slice. I don’t think it’s an issue per se (which is why I’m here and not on GitHub). I’m aware of the surface.flip_normal() function, but when I tried that, something very strange happened.
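
For intuition about why a complementary angle can show up, here’s a toy sketch of one plausible sign convention (not necessarily what compute_periodic_rotation actually does): the rotation is derived from the two plane normals, so a normal pointing the “wrong” way yields the complement, and flipping a normal shifts the result by 180 degrees.

  import math

  def rotation_deg(n1, n2):
      # Signed rotation about z taking n2 onto n1, mapped into [0, 360)
      return math.degrees(math.atan2(n1[1], n1[0])
                          - math.atan2(n2[1], n2[0])) % 360.0

  # Hypothetical normals for a 60-degree wedge
  n1 = (0.0, -1.0)                                                 # at 270 deg
  n2 = (math.cos(math.radians(330)), math.sin(math.radians(330)))  # at 330 deg

  print(rotation_deg(n1, n2))                # ~300: the complement of 60
  print(rotation_deg(n1, (-n2[0], -n2[1])))  # ~120: the flip shifts by 180

Under that convention, a flip would turn 300 into 120, which does evenly divide 360, consistent with the warning disappearing.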

The warning goes away, but the physics of my problem is clearly not the same, and on one batch it did something I’ve never seen before (below)


In the above image, the left side is what happens when I flip one of my normals. The right side is what happens without the flip, i.e. the normal model (the one that computes 300 degrees). I did not expect flip_normal to cause this difference in results, so maybe I’m misunderstanding?

I’m not hitting lost particles, but something is very, very wrong in the flipped-normal case. Maybe with a normal pointing in the wrong direction, the particle gets into an infinite loop where it is transported back to where it started and keeps hitting the same surface instead of entering through the other plane? I’m not sure OpenMC should fail here, and it did emit that warning, so perhaps nothing needs to change. I was just wondering if anyone has an explanation for my observation.

Could you provide an example input?

I will try to make a more minimal example that replicates the behavior, since this one is a bit complicated and we aren’t quite ready to share the model script.

I opened a PR (Fix a bug in rotational periodic boundary conditions by GuySten · Pull Request #3692 · openmc-dev/openmc) that should fix this issue. Please try it out.

Hello again,

Back in December, setting merge_surfaces = True seemed to fix my issue, and I was able to run a Cardinal multiphysics simulation using O(30) OpenMC k-eigenvalue solves. Note that this is a sixth-core model constructed by making a full core and slicing it between two planes with a periodic BC applied.
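
For reference, that option is set on the geometry at export time; a minimal sketch assuming the openmc Python API, where root_universe is the root of my sliced model:

  import openmc

  geometry = openmc.Geometry(root_universe)
  geometry.merge_surfaces = True  # merge redundant surfaces when writing the XML
  geometry.export_to_xml()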

I’m now running the same model (i.e. the same XML) in two other scenarios: OpenMC depletion and a Cardinal multiphysics search for keff. Both are now encountering the same problem as before, with maximum events and a -nan batch. Here’s an example failure where it reports both lost particles and particles hitting the maximum number of events.

 WARNING: Could not find the cell containing particle 2343
 WARNING: Could not find the cell containing particle 2344
 WARNING: Particle 38527 underwent maximum number of events.
 ERROR: No fission sites banked on MPI rank 0
...

These simulations use more k-eigenvalue solves than before: O(100) for depletion and more for the multiphysics search. I bring this up because these simulations go further into the random number stream than my previous “successful” simulation. It seems like, very rarely, particles are getting stuck between the periodic boundaries without actually transporting further. I can’t envision another reason why particles would reach the event limit, since there are many chances to be absorbed in this reactor.

For the record, I am using an OpenMC version beyond the merged PR #3692. To test whether the issue was with my model or with the periodic BC, I re-ran both of the aforementioned simulations with a reflective BC instead of a periodic one (all other details are identical). I’ve also tried running openmc -g with my XML. Since the troublesome event happens much further down the random number stream, I don’t think a single standalone transport will show the same issue in geometry debug mode.
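
The only change in that A/B test was the boundary_type on the two wedge planes; a sketch (the surface names here are hypothetical):

  # plane_a / plane_b: the two planes bounding the sixth-core wedge
  for surf in (plane_a, plane_b):
      surf.boundary_type = 'reflective'  # was 'periodic'; everything else identical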

Both the depletion and the multiphysics search were able to complete without these errors when using a reflective BC. This isn’t fully “proof,” but it does support the idea that there might still be something wrong with the periodic BC. While it appears to be an extremely rare event, at some point a particle is getting into trouble. It could be specific to my model, but it appears to be some interaction of my model with the periodic BC. Any debugging advice would be appreciated! I might need to rebuild OpenMC with some kind of print statements to diagnose further.

I can share the results of my openmc -g if needed. There were 746 cells with fewer than ten overlap checks, 402 of which had zero checks.

Just had an idea. I’m going to crank up the particles and let it run over the weekend to see if anything results from that.

I’ve done a bit more digging. I think the issue isn’t with the periodic BC implementation itself, but rather the interaction of the periodic BC with my geometry, specifically the TRISO lattice. Interestingly, when I ran openmc -g with more particles, I did get one overlap:

ERROR: Overlapping cells detected: 192215, 192214 on universe 32

Looking in my XML,

<cell id="192214" fill="3" region="-5" universe="32"/>
<cell id="192215" material="7" region="5" universe="32"/>

These cells correspond to a cylinder filled with a universe of a TRISO lattice inside surface 5, with graphite outside. I wasn’t sure how region="-5" and region="5" could overlap, but what I think is happening is that a particle in the TRISO lattice hits the periodic boundary and enters the other side in a different material. For example, here’s a view of my reactor


and the corresponding place it maps to when crossing the periodic surface

As you can see, the TRISO lattices don’t actually line up as they should for a periodic boundary. I think this might be the source of the problem. I’m not sure exactly how the code would recognize this as an “overlap,” but if it does, and that kind of “overlap” can lead to a particle hitting max events, it makes sense that this would stop being a problem when the boundary is changed from periodic to reflective.

I suspect that rotating the fuel compacts so that the TRISO lattices line up could be a possible solution. As a test, I homogenized the universes containing TRISO lattices and re-ran with periodic boundary conditions, and the simulation was able to complete with no issues (203 total OpenMC k-eigenvalue solves). I think this suggests it’s really a geometry issue.
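
For the rotation idea, this is roughly what I have in mind, using the Cell.rotation attribute (the cell and universe names are hypothetical):

  # Rotate each compact's fill so the TRISO lattice respects the 60-degree symmetry
  compact_cell.fill = triso_lattice_universe
  compact_cell.rotation = (0.0, 0.0, 60.0)  # degrees about the x, y, z axes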

I’ve still not gotten to the bottom of this. The more tests I do, the more I think something is not right with the periodic boundary condition implementation. To be absolutely sure about the source of my overlap, I removed all the TRISO from my model and replaced the Cell.fill in those cylinders with a regular material. Using this simpler XML, I ran openmc -g twice with only one difference: once with a periodic BC and once with a reflective one.

In order to do a thorough overlap check, I increased to 5,000,000 particles per batch, with 50 inactive and 150 active batches. Somewhere in batch 57, I get this error:

       56/1    1.01606    1.01589 +/- 0.00012
 ERROR: Overlapping cells detected: 671, 670 on universe 346

Looking in my geometry.xml

 <cell id="670" material="2" region="-231" temperature="1133.65" universe="346"/>
 <cell id="671" material="2" region="231" temperature="1133.65" universe="346"/>
...
 <surface id="231" type="z-cylinder" coeffs="0.0 0.0 0.6"/>
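
In Python API terms, that pair is built like so (a sketch; the material object name is hypothetical):

  import openmc

  cyl = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.6)  # surface 231
  inner = openmc.Cell(fill=mat, region=-cyl)     # cell 670: region="-231"
  outer = openmc.Cell(fill=mat, region=+cyl)     # cell 671: region="231"
  # -cyl and +cyl partition space, so by construction they cannot overlap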

It’s the same case as before: an overlap between two regions that by definition should not overlap (i.e., inside a ZCylinder and outside the same ZCylinder). Universe 346 occurs in my middle assembly’s upper reflector, which exists in a few places. Here’s a view from the plotter by cell. The middle assemblies are the lime green hexagons with dark green cylinders and a red cylinder in the middle,


In the image, I’m hovering over the dark green half cylinder (cell 670 mentioned in the overlap check error), which also has many other instances in my model, namely the other full dark green cylinders of the same color.

When I repeat this experiment and only change the surface’s boundary condition to reflective, cells 670 and 671 succeed in many, many overlap checks.

 ==================>     CELL OVERLAP CHECK SUMMARY     <===================

 Cell ID      No. Overlap Checks
...
      670         215074847
      671         501317116

The fact that only changing the BC causes an overlap to be detected makes me very suspicious of the periodic BC implementation. This behavior is consistent with my TRISO tests, where the reflective BC versions succeed while the periodic versions do not. To me, the common denominator in the failures is the periodic BC.

Without at least an example input, and better yet a minimal example, it will be very hard to help you.

Here’s the XML I’ve been using for these tests. It’s about as simple as I can make it (no TRISO) where I know the problem is still occurring.
settings.xml (593 Bytes)
geometry.xml (200.7 KB)
materials.xml (5.7 KB)

I recompiled OpenMC to print a bit more information right before the fatal error in the overlap check, and got something interesting. According to the error message, the particle believes it’s in cell 670 and has detected that cell 671 “overlaps” with 670. Interestingly, the global coordinate position and direction are as follows:

Current global coordinate position r=(18.6115, 228.396, 70.5)
Current global coordinate direction u=(-0.79841, 0.635684, -0.0776792)

I’ve gone to this location in the plotter (hovering with my cursor, which you can see in the bottom right corner):


In the bottom left corner, you can see that this position is actually in cell 671! For some reason, OpenMC thinks it’s in cell 670 during the overlap check (from model::cells[p.coord(j).cell()]->id_) when it actually is not.
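
One way to cross-check this independently of the plotter would be to query the cell at that position through openmc.lib (a sketch; it needs the model XML in the working directory):

  import openmc.lib

  openmc.lib.init()  # reads the XML in the current directory
  cell, instance = openmc.lib.find_cell((18.6115, 228.396, 70.5))
  print(cell.id, instance)  # which cell actually owns this point?
  openmc.lib.finalize()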

I believe this suggests that at some point, an event caused the particle to think it’s in a different cell than the one it is actually in, causing an “artificial” overlap. I’m currently testing openmc -g again with the fatal_error removed so the particle continues onward, to see whether this singular event is the only overlap or whether each subsequent event is also considered an overlap.

It is very hard to debug something that happens so infrequently.

I think it is possible that, with some small probability, a particle gets lost.

The effect of changing boundary conditions might be equivalent to the effect of changing the seed.

I do not have sufficient computing power to debug this problem because of the very high statistics necessary.

I am also not that bothered by one particle getting lost out of so many.

I think the real problem is that OpenMC cannot recover from one particle getting lost (if I understood correctly, it crashes the simulation).

To try to fix that, we need a reproducible input and a reproducible cross section library, where the seed is chosen so that the particle gets lost early in the simulation.
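
The seed itself is easy to pin down (a sketch with the openmc Python API); the hard part is finding a value that triggers the failure early:

  import openmc

  settings = openmc.Settings()
  settings.seed = 1  # fix the PRNG stream so the failing history is deterministic
  settings.export_to_xml()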

By the way, I found that OpenMC has a problem with overlap checking of spheres with tiny radii.

I also suspect this might be true for small cylinders (like in your problem).

@ligross, could you check if Increase numerical robustness of cylindrical and spherical distance to boundary checks by GuySten · Pull Request #3923 · openmc-dev/openmc fixes your problems? I could not test your input because of insufficient computing power.

Thanks for looking into this and proposing a solution so quickly!

It is very hard to debug something that happens so infrequently.

Yes, this has been really challenging to figure out, compounded by the long wait time until failure.

The effect of changing boundary conditions might be equivalent to the effect of changing the seed.

Interesting. I think I’d still expect to see a failure in the reflective case if the issue were due to a lack of robustness with cylinders, but regardless, I will try to see if your PR has an impact.