Hi all,
I am having more trouble with my SaltProc code when trying to run OpenMC depletion in parallel. SaltProc copies the geometry, settings, material, and tally .xml files to a working directory, then runs OpenMC depletion through a script called with subprocess.run, with any MPI arguments prepended to the script call and the paths to those files passed as additional arguments (e.g. subprocess.run(['mpirun', '-N', '2', 'python', 'openmc_deplete.py', '--material', 'path/to/materials.xml', ...])).
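For concreteness, here is a minimal sketch of that call pattern (the script name and the --material flag are taken from the example above; the other file paths are passed the same way, and the exact flags in SaltProc may differ):

import subprocess

# Any MPI arguments are prepended to the script invocation, and the paths
# to the copied .xml files are passed as additional arguments (only the
# materials path is shown here; the other files follow the same pattern).
mpi_args = ['mpirun', '-N', '2']
script_args = ['python', 'openmc_deplete.py', '--material', 'path/to/materials.xml']

result = subprocess.run(mpi_args + script_args)
print(result.returncode)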
Assuming all the paths are absolute, one can run this command in an interactive Python session without any problems. However, in SaltProc, several components of OpenMC's Python API are imported, for example to collect material compositions before and after depletion.
It seems to be the case that (assuming the OpenMC Python API has been installed with mpi4py, and OpenMC itself has been built with MPI and parallel HDF5), when a component of the openmc.deplete module that uses openmc.mpi.comm is imported, the subprocess.run command simply does not work, and fails with exit code 1 without any stderr or stdout output.
In my own testing, I’ve confirmed this to be the case both locally and on the cluster machine I am using (Sawtooth at INL).
In playing around with this, it seems to be a bigger issue related to using subprocess.run to launch MPI after importing the MPI communicator into Python.
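To illustrate what I mean, here is a minimal sketch that takes OpenMC out of the picture entirely (it assumes openmc.mpi.comm is just a thin wrapper around mpi4py's MPI.COMM_WORLD, and that importing mpi4py.MPI is what initializes MPI in the parent process):

from mpi4py import MPI  # importing this initializes MPI in the parent process
import subprocess

comm = MPI.COMM_WORLD  # the communicator that openmc.mpi.comm should wrap
print('parent rank/size:', comm.Get_rank(), comm.Get_size())

# Launching a fresh MPI job from a process that has already initialized MPI
# is the step that appears to fail: exit code 1, empty stdout and stderr.
result = subprocess.run(['mpiexec', '-np', '2', 'python', 'test.py'],
                        capture_output=True)
print(result.returncode, result.stdout, result.stderr)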
To reproduce this behavior:
- Follow the installation steps here.
- Create a blank Python script called test.py in your working directory.
- Open an interactive Python session and run the following commands:

import openmc.mpi
import subprocess
subprocess.run(['mpiexec', '-np', '2', 'python', 'test.py'])
You should get the following response:
(CompletedProcess(args=['mpiexec', '-np', '2', 'python', 'test.py'], returncode=1, stdout=b'', stderr=b''),)
You can try this with mpirun as well; it results in the same outcome.
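One thing I have not been able to verify yet, but which might be relevant: since the parent process has already initialized MPI, the child mpiexec inherits any MPI/PMIx-related environment variables the parent set, and that inheritance could be what breaks the nested launch. Here is a sketch of stripping those variables before the call (the OMPI_/PMIX_/PMI_ prefixes are my assumption for an Open MPI install):

import os
import subprocess

# Copy the current environment, dropping variables that the parent's MPI
# initialization may have set, so the child mpiexec starts from a clean slate.
clean_env = {k: v for k, v in os.environ.items()
             if not k.startswith(('OMPI_', 'PMIX_', 'PMI_'))}

result = subprocess.run(['mpiexec', '-np', '2', 'python', 'test.py'],
                        env=clean_env, capture_output=True)
print(result.returncode, result.stderr.decode())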
Software info:
OpenMC v0.13.3
OpenMPI v