Lattice universes nested inside another lattice universe are quite common in reactor design. Say we're trying to simulate a full core of a PWR like the AP1000 or AP600: each assembly consists of a rectangular 17 x 17 fuel lattice, and a core can load 100-200 assemblies.
If we run `model.differentiate_depletable_mats`, we get roughly 30-40k distinct materials, one per fuel cell. Suddenly we need a supercomputer, because the depletion calculation will eat hundreds of gigabytes of memory.
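A back-of-envelope estimate of where that count comes from (the 264 fuel pins per assembly and the 157-assembly core are my assumed numbers, not taken from any specific design):

```python
# Assumed numbers: a 17 x 17 lattice has 289 positions, of which ~264 are
# fuel pins (the rest are guide/instrument tubes), and an AP1000-class
# core loads on the order of 157 assemblies.
FUEL_PINS_PER_ASSEMBLY = 264
ASSEMBLIES = 157

# One depletable material per fuel cell after differentiation:
n_materials = FUEL_PINS_PER_ASSEMBLY * ASSEMBLIES
print(n_materials)  # 41448, i.e. ~40k depletable materials

# Each material tracks hundreds to thousands of nuclides plus its own
# reaction-rate tallies, so memory grows roughly linearly with this count.
```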
I think it would be very nice if we could limit the level of differentiation: instead of differentiating down to each cell, we could differentiate down to a universe. It would be even better if we could group several lattice cells containing that universe into a single material, to simplify things further (effectively homogenizing the material of a particular part of the reactor).
(I'll leave it here; perhaps I'll attempt this in the future, once I've finished my master's studies and started my PhD.)
This is indeed why Monte Carlo methods are rarely used for full-core depletion simulations.
Production LWR analysis is essentially universe-based depletion on a homogenized assembly scale: with nodal methods, a single CPU can do these calculations in hours or even minutes. The lattice physics calculations for the assemblies themselves sometimes use what's essentially universe-based depletion at the pin level, though these days they usually model the pins explicitly.
You are completely correct that if you want to deplete each fuel cell in a full core in OpenMC, you will need a vast amount of memory, and some grouping.
> It would be great if we can even group several cells of lattice that contain that universe with a material to further simplify the conditions (as if we homogenize the material of particular part of the reactor in that way).
Sorry for the late reply, and thanks for the references. While I'm still looking for an interesting way of grouping/homogenizing the fuel in a full core, last week I tried something very simple: I separated the fuel materials according to this scheme:
It's akin to the null homogenization in Dr. Boyd's work, except that I homogenized even further by grouping the symmetrically equivalent assemblies together (to enforce the symmetry; otherwise the result may come out asymmetric, like the case below).
Hey guys, I just wanted to point out that Serpent already has exactly this feature. It's implemented in the `div ... sep` card, where you can pass in the universe nesting depth at which you'd like to subdivide. A particularly useful case is when you'd like to assume identical compositions in the particles of a TRISO compact.
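For reference, the card looks roughly like this (syntax from memory; the exact meaning of the depth level is documented in the Serpent manual, so double-check there):

```
% Divide material "fuel" into separate depletion zones, splitting only
% down to the given universe nesting depth rather than per cell
div fuel sep 1
```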
IMO, we should just build out the `differentiate_depletable_mats` function a bit more. I assume there's some loop in there over nesting levels where you'd simply stop splitting after a given depth. This would certainly be a useful feature.
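A toy sketch of that depth cutoff, using a plain nested dict to stand in for the universe/cell tree (this is not the real OpenMC internals, just the proposed logic):

```python
from itertools import count

def differentiate(universe, max_depth, depth=0, ids=None, shared=None):
    """Depth-limited material differentiation on a toy geometry tree.

    `universe` is {"cells": [...]}, where each cell is either a material
    name (str) or a nested universe (dict).  Above `max_depth`, every
    material cell gets its own clone; at and below it, all cells under
    one cutoff-level universe share a single clone.
    """
    if ids is None:
        ids = count()  # global counter so clone names stay unique
    names = []
    for i, cell in enumerate(universe["cells"]):
        if isinstance(cell, dict):
            # Reaching the cutoff: one shared clone for the whole subtree.
            sub_shared = shared
            if shared is None and depth + 1 >= max_depth:
                sub_shared = f"mat_{next(ids)}"
            names += differentiate(cell, max_depth, depth + 1, ids, sub_shared)
        else:
            new = shared if shared is not None else f"{cell}_{next(ids)}"
            universe["cells"][i] = new
            names.append(new)
    return names

# A core of 2 assemblies, each an inner universe of 3 fuel cells:
core = lambda: {"cells": [{"cells": ["fuel"] * 3}, {"cells": ["fuel"] * 3}]}

per_pin = differentiate(core(), max_depth=99)  # no cutoff reached
per_asm = differentiate(core(), max_depth=1)   # stop at assembly level
print(len(set(per_pin)), len(set(per_asm)))    # 6 2
```

With the cutoff at the assembly level, the 6 fuel cells collapse into 2 depletable materials, which is exactly the memory saving the feature is after.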