Issues running depletion in parallel through a subprocess after importing the deplete module

Hi all,

I am having more trouble with my SaltProc code when trying to run OpenMC depletion in parallel. SaltProc copies the geometry, settings, materials, and tally .xml files into a working directory, then runs OpenMC depletion via a script invoked with subprocess.run. Any MPI arguments prepend the script call, and the paths to those files are passed as additional arguments, e.g. subprocess.run(['mpirun', '-N', '2', 'python', 'openmc_deplete.py', '--material', 'path/to/materials.xml', ...]).
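
For concreteness, here is a rough sketch of the kind of call SaltProc ends up making (the script name, flag names, and paths below are placeholders, not SaltProc's actual interface):

import subprocess

# Hypothetical example: MPI launcher arguments prepended to the depletion
# script invocation, with paths to the copied .xml files passed as CLI arguments.
mpi_args = ['mpirun', '-N', '2']
cmd = mpi_args + [
    'python', 'openmc_deplete.py',
    '--material', '/abs/path/to/materials.xml',
    '--geometry', '/abs/path/to/geometry.xml',
]
result = subprocess.run(cmd, capture_output=True)
print(result.returncode, result.stdout, result.stderr)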

Assuming all the paths are absolute, one can run this command in an interactive Python session without any problems. However, SaltProc imports several components of OpenMC's Python API, for example to collect material compositions before and after depletion.

It seems that (assuming the OpenMC Python API has been installed with mpi4py, and OpenMC itself is built with MPI and parallel HDF5), once a component of the openmc.deplete module that uses openmc.mpi.comm has been imported, the subprocess.run command simply does not work: it fails with exit code 1 and produces no stderr or stdout.

In my own testing, I’ve confirmed this to be the case both locally and on the cluster machine I am using (Sawtooth at INL).

In playing around with this, it seems to be a broader issue with using subprocess.run to launch MPI after importing the MPI communicator into Python.

To reproduce this behavior:

  1. Follow the installation steps here
  2. Create a blank Python script called test.py in your working directory.
  3. Open an interactive Python session and run the following commands:
import openmc.mpi
import subprocess
subprocess.run(['mpiexec', '-np', '2', 'python', 'test.py'])

You should get the following response:
(CompletedProcess(args=['mpiexec', '-np', '2', 'python', 'test.py'], returncode=1, stdout=b'', stderr=b''),)

You can try this with mpirun as well; it results in the same outcome.

Software info:
OpenMC v0.13.3
OpenMPI v

In writing this up, I found this Stack Overflow post, which provides a workaround that appears to work, but I'm still testing it out.
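
I won't reproduce the linked post here, but as I understand it, the environment-variable workaround amounts to scrubbing the MPI-related variables that get injected into the parent environment when the communicator is initialized, before launching the child mpiexec. Something like this (the variable prefixes are my guess for Open MPI):

import os
import subprocess

# Drop MPI-related variables that singleton MPI initialization may have set
# in this process's environment, then launch mpiexec with the cleaned copy.
clean_env = {k: v for k, v in os.environ.items()
             if not k.startswith(('OMPI_', 'PMIX_', 'PMI_'))}

subprocess.run(['mpiexec', '-np', '2', 'python', 'test.py'], env=clean_env)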

I’d like to come up with a more robust solution that doesn’t rely on fixing the environment variables.


Hi Olek,

I'm also attempting to run a parallel depletion simulation and looked to the documentation and notebooks for examples, but I didn't find the instructions particularly user-friendly. What I did find was this, which says to create your own MPI intercommunicator and modify the member variable openmc.deplete.comm, of type mpi4py.MPI.Comm. My first attempt to understand this data member was to print(openmc.deplete.comm), but the Python gods laughed by printing

<openmc.dummy_comm.DummyCommunicator object at 0x7fc999a316c0> 
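
For what it's worth, my reading of that passage is something along these lines, though I haven't verified that this is the intended usage:

from mpi4py import MPI

import openmc.deplete

# Swap the DummyCommunicator for a real mpi4py communicator, as the docs
# seem to suggest (untested on my end).
openmc.deplete.comm = MPI.COMM_WORLD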

Usually, I specify mpi_args with something like

openmc.run(mpi_args=['mpiexec', '-n', '2'], threads=12)

I tried and failed with the following

operator = openmc.deplete.CoupledOperator(model, "./casl_chain.xml")
power = 225000  # W
time_steps = [30] * 2
integrator = openmc.deplete.CECMIntegrator(operator, time_steps, power=power, timestep_units='d')
integrator.integrate(mpi_args=['mpiexec', '-n', '2'], threads=12)

Is there something about the depletion implementation that prevents the above from being supported (on the assumption that anything trivial to implement would already have been implemented)? IMO this would be the most user-friendly way to specify parallel depletion arguments.

I was able to arrive at a solution after some GitHub repo lurking (ty @kkiesling). It seems the way to get two MPI ranks with 12 OpenMP threads per rank is to run

export OMP_NUM_THREADS=12
mpiexec -n 2 python run_depletion.py
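
Here run_depletion.py is essentially the setup from my failed attempt above, just without the mpi_args/threads keywords; a rough sketch (model construction elided):

# run_depletion.py
import openmc.deplete

model = ...  # build or load your openmc.Model here

operator = openmc.deplete.CoupledOperator(model, "./casl_chain.xml")
power = 225000  # W
time_steps = [30] * 2  # two 30-day steps
integrator = openmc.deplete.CECMIntegrator(operator, time_steps, power=power,
                                           timestep_units='d')
integrator.integrate()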

I agree that it would be nice to specify this inside your script rather than through environment variables. If it would be too laborious to allow openmc.deplete.abc.Integrator.integrate to accept MPI arguments, I think a documentation PR and/or an openmc-notebook PR would help clarify the best way to run parallel depletion.