Is there a way to limit memory usage?

Hello!

I’m running this on a computing cluster. My input files are quite large (TRISO particles make my geometry.xml file around 8 million lines long) and the memory I request is always too little (I’ve gone up to 128 GB).

Is there any way to tell OpenMC to try to use less memory, or am I screwed?

Thanks so much!
Percy


Hi,

I use very simple models, and in my calculations memory usage depends only on the number of particles per batch.
You can try decreasing that parameter.
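
For example, something like this in the Python API (a minimal sketch; the batch and particle counts are placeholders, and fewer particles per batch usually means you need more batches for the same statistics):

import openmc

settings = openmc.Settings()
settings.batches = 200        # run more batches...
settings.particles = 10000    # ...with fewer particles per batch to lower peak memory
settings.export_to_xml()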
Good luck!

Hi Percy,

If you are using the ‘develop’ branch of OpenMC, you may want to try the following:

import openmc

settings = openmc.Settings()
settings.material_cell_offsets = False  # skip building the cell-instance offset tables

This is a new option that has been introduced since the last release (0.11).

Best,
Paul

Are you using OpenMP parallelism? With OpenMP, each thread uses shared memory to store continuous energy cross sections and tally results. That can seriously cut down the memory cost.
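
For example, from the Python API (a minimal sketch; 32 is just a placeholder for the number of cores on your node):

import openmc

# Run with 32 shared-memory OpenMP threads on a single node
openmc.run(threads=32)

The same thing can be done on the command line with openmc -s 32.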

If you’re running on multiple nodes, you can set it up so that there is one MPI process on each node and then OpenMP threading to use all the processor cores on each node. It’s a little tricky to set up, but we can give you some tips if you want to go that route.

I am not, but since I have the option to use multiple nodes, I think that’s probably the way to go, although as a newbie I haven’t tried it yet.

I was looking at the docs and it looks like I’ll have to recompile with export CXX=mpicxx? The cluster already has OpenMPI as an available module, so I’m good on that count. I’m only able to install things in a conda environment; is that okay?

Also if I got the compilation right, how can I make openmc aware of the other nodes I’ve requested?

Any tips are definitely welcome! I’m really surprised at how responsive this community is, thanks so much! Maybe someday I’ll be able to pay it forward.

Best wishes,
Percy

Thanks so much, it’s running now! What is material_cell_offsets? I can’t find it in the manual.

Hope you are having a good day!
Percy

Thanks for the input! I definitely think I will work with simpler models in the future.

Hope you are well,
Percy

Hi Leo (sorry if I got your name wrong before),

We have a new tally filter called a cell instance filter that allows you to tally individual instances of cells that are repeated in the geometry (as would happen with a lattice). This feature relies on setting up offset tables that can potentially require quite a bit of memory, so the option I gave you (settings.material_cell_offsets = False) just turns off the automatic generation of those tables. It would only affect you if you were going to use a cell instance filter in your problem.
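
For future reference, a per-instance tally looks roughly like this (a sketch with a hypothetical fuel cell repeated by a lattice; the instance indices are placeholders):

import openmc

fuel_cell = openmc.Cell(name='fuel')  # hypothetical cell repeated by the lattice

# Score fission separately in the first three instances of that cell
tally = openmc.Tally(name='per-instance fission')
tally.filters = [openmc.CellInstanceFilter([(fuel_cell, 0), (fuel_cell, 1), (fuel_cell, 2)])]
tally.scores = ['fission']

If you go that route, you would need to leave material_cell_offsets at its default of True so the offset tables get built.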

Best,
Paul

Glad you got it working with the material_cell_offsets option. I’ll go ahead and give some details about MPI and OpenMP for future reference:

Confusingly, OpenMP and OpenMPI are two different things.

OpenMPI (one implementation of the MPI standard) is categorized as distributed-memory parallelism. Each MPI process uses its own chunk of memory and doesn’t share with other MPI processes. You need to use MPI to spread computations across multiple nodes of a cluster because each node cannot directly access the memory of other nodes. As you say, you need to compile OpenMC with export CXX=mpicxx in order to use MPI.

OpenMP is a kind of shared-memory parallelism. All the OpenMP threads have to live on just one machine; they can’t be split across nodes in a cluster like MPI. The advantage is that since they are all on one machine, they can share the memory on that machine. So you only have to load one set of cross sections and one description of the geometry while still computing with however many threads. Conversely, each MPI process makes a redundant copy of that data.

It’s also possible to do both simultaneously. So if you have 2 nodes each with 32 CPU cores, you can run with 2 MPI processes and 32 OpenMP threads to get the advantages of shared-memory parallelism while still computing on multiple nodes. You have to be careful, though, to make sure the call to mpiexec/mpirun launches those two processes on the two different nodes instead of launching them both on one node. The best way that I know to ensure that happens is to write a script that gets the list of nodes your job is running on (via the environment variable SLURM_JOB_NODELIST for Slurm or PBS_NODEFILE for PBS), removes duplicated values from that list, and passes it to mpiexec/mpirun with the -machinefile flag.
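
As a concrete example, here is a minimal Python sketch of such a launcher script, assuming a Slurm cluster, 32 cores per node, and that mpiexec and openmc are on your PATH (all of those are assumptions to adapt):

import os
import subprocess

# Expand Slurm's compressed node list (e.g. "node[01-04]") into hostnames.
# (On PBS, read and de-duplicate the file named by PBS_NODEFILE instead.)
nodelist = os.environ['SLURM_JOB_NODELIST']
hosts = subprocess.run(['scontrol', 'show', 'hostnames', nodelist],
                       capture_output=True, text=True, check=True).stdout.split()

# Remove duplicates while keeping the order, then write one host per line
unique_hosts = list(dict.fromkeys(hosts))
with open('machinefile', 'w') as f:
    f.write('\n'.join(unique_hosts) + '\n')

# One MPI rank per node, each rank using 32 OpenMP threads
subprocess.run(['mpiexec', '-n', str(len(unique_hosts)), '-machinefile', 'machinefile',
                'openmc', '-s', '32'], check=True)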

Hi, dear Percy_Harris,

I’m also working on TRISO fuel. When my input file reaches one hundred thousand lines, the computation becomes very difficult. I wonder how you got it to run? And how is the computational efficiency?

Thanks so much!
Skywalker.