Hi all,
I am trying to run parallel depletion to test my MPI functionality. I believe a recent PR enabled serial HDF5 use with MPI-parallel OpenMC runs. This is quite convenient for modestly sized clusters like the one I’m running on.
In trying to run the pincell_depletion case, I set OMP_NUM_THREADS and run "mpirun -n 12 python run_depletion.py".
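For completeness, this is the shape of the invocation I'm using (the thread count here is just an illustrative value, not what I actually set):

    # example only: each of the 12 MPI ranks gets OMP_NUM_THREADS OpenMP threads
    export OMP_NUM_THREADS=4
    mpirun -n 12 python run_depletion.py
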
This seems to work fine until the depletion reaction rates are written, at which point each MPI rank emits this error:
Traceback (most recent call last):
  File "run_depletion.py", line 95, in <module>
    integrator.integrate()
  File "/opt/openmc/gnu-dev/openmc/deplete/abc.py", line 881, in integrate
    p, self._i_res + i, proc_time)
  File "/opt/openmc/gnu-dev/openmc/deplete/results.py", line 493, in save
    results.export_to_hdf5("depletion_results.h5", step_ind)
  File "/opt/openmc/gnu-dev/openmc/deplete/results.py", line 215, in export_to_hdf5
    with h5py.File(filename, **kwargs) as handle:
  File "/opt/anaconda3/3.7/lib/python3.7/site-packages/h5py/_hl/files.py", line 394, in __init__
    swmr=swmr)
  File "/opt/anaconda3/3.7/lib/python3.7/site-packages/h5py/_hl/files.py", line 176, in make_fid
    fid = h5f.create(name, h5f.ACC_TRUNC, fapl=fapl, fcpl=fcpl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 105, in h5py.h5f.create
OSError: Unable to create file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable')
What is the cause of this? Should the call to integrator.integrate somehow be told how many MPI processes to use, similar to
openmc.run(mpi_args=['mpiexec', '-n', '32'])
?