How can I pass run arguments like 'cwd' and 'threads' when running OpenMC through the C API in openmc.lib?
I tried the context manager openmc.lib.run_in_memory, but I couldn't figure out the correct syntax for passing arguments.
For example, I want to run OpenMC in memory batch-by-batch in a different directory using cwd='myfolder/'. Maybe something like:
with openmc.lib.run_in_memory(kwargs={'cwd':'myfolder/'}):
Thanks!
Hi @mkreher13. The arguments that are passed to init via run_in_memory are the same as those that one could specify on the command line. To see what's possible, run openmc --help from a command line, which gives you:
Usage: openmc [options] [directory]
Options:
-c, --volume Run in stochastic volume calculation mode
-g, --geometry-debug Run with geometry debugging on
-n, --particles Number of particles per generation
-p, --plot Run in plotting mode
-r, --restart Restart a previous run from a state point
or a particle restart file
-s, --threads Number of OpenMP threads
-t, --track Write tracks for all particles
-e, --event Run using event-based parallelism
-v, --version Show version information
-h, --help Show this message
If you want to change the number of threads and the working directory, then this could be accomplished with:
with openmc.lib.run_in_memory(args=['--threads', '16', 'myfolder']):
...
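Since the original question mentioned running batch-by-batch, here is a minimal sketch of how that could look inside the context manager. It assumes the batch-control functions openmc.lib.simulation_init, openmc.lib.next_batch, and openmc.lib.simulation_finalize (check the openmc.lib documentation for the exact behavior in your version) and a hypothetical 'myfolder' directory containing the XML input files:

import openmc.lib

# init/finalize are handled by the context manager; args are forwarded to init
with openmc.lib.run_in_memory(args=['--threads', '16', 'myfolder']):
    openmc.lib.simulation_init()
    while True:
        # Run a single batch; a nonzero status signals that the simulation
        # is done (maximum batches reached or a trigger was satisfied)
        status = openmc.lib.next_batch()
        # ... inspect or modify the model between batches here ...
        if status != 0:
            break
    openmc.lib.simulation_finalize()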
Follow-up question: how about MPI? I tried a few different things:
openmc.lib.init(args=['mpiexec', '-n', '4'])
and
run_kwargs = {'mpi_args': ['mpiexec', '-n', '4']}
openmc.lib.init(**run_kwargs)
But the first gave an MPI error, and the second gave an unexpected-argument error for 'mpi_args'. I'm also confused by the option to pass an MPI intracommunicator via mpi4py. How does this come into play?
I am able to use MPI with regular OpenMC runs without the C API, so I know my parallel installation is configured correctly. I'd like to tag Shikhar Kumar, but I'm not sure he is on this Discourse page.
Thanks,
Miriam
Sorry, I know this is a little confusing. Because openmc.run just calls the openmc executable under the hood, it's very easy to instead call mpiexec ... openmc, which is why openmc.run has an mpi_args argument. For openmc.lib.init(), we're not calling a subprocess, so there's no way to instantiate new MPI processes (at least in a way that MPI can handle cleanly). Instead, you need to call your Python script with mpiexec. When doing so, you should also use mpi4py to get the communicator to pass to openmc.lib.init. So, you could have a script called "run_openmc.py" with:
import openmc.lib
from mpi4py import MPI

# Pass the mpi4py communicator so OpenMC uses the processes started by mpiexec
openmc.lib.init(intracomm=MPI.COMM_WORLD)
openmc.lib.run()
openmc.lib.finalize()
and then call it with mpiexec -n N python run_openmc.py.
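If you also want the command-line style options from earlier in this thread, init should accept both together (a sketch reusing the hypothetical 'myfolder' directory and an illustrative thread count; check the init signature in your version):

openmc.lib.init(args=['--threads', '4', 'myfolder'], intracomm=MPI.COMM_WORLD)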
Ok, thanks! That works, with the downside that the entire Python script runs in parallel instead of just the Monte Carlo run.
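One way to limit that is to guard the non-Monte-Carlo work on rank 0 and let all ranks participate only in the transport run. A sketch assuming mpi4py, where build_model and process_results are hypothetical placeholders for serial pre/post-processing:

import openmc.lib
from mpi4py import MPI

comm = MPI.COMM_WORLD

if comm.rank == 0:
    # hypothetical serial pre-processing, e.g. building and exporting the model
    build_model()

# make sure the input files exist before any rank initializes OpenMC
comm.barrier()

openmc.lib.init(intracomm=comm)
openmc.lib.run()
openmc.lib.finalize()

if comm.rank == 0:
    # hypothetical serial post-processing of the results
    process_results()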