Hi all,
I am currently trying to run an OpenMC simulation on an HPC cluster. The cluster uses Slurm, and I submit jobs with an sbatch submission script. In that script I set the nodes, environment variable paths, conda environment activation, etc., and then run mpirun python python_script.py (plus some other arguments, omitted here for simplicity).
In the python script I create variables to set up different OpenMC simulations. This has been configured with mpi4py so that I can create the variables on the master node and then use the other nodes to run an OpenMC simulation with a large number of particles. So I am wondering: is it possible to run one simulation with a large number of particles across multiple nodes? If so, what is the best way to do this?
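For context, here is a stripped-down sketch of the mpi4py part of the script (the variable names and values are just placeholders, not my real inputs):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Build the simulation parameters on the master node (rank 0) only.
if rank == 0:
    run_params = {"particles": 10_000_000, "batches": 100}  # placeholder values
else:
    run_params = None

# Broadcast the parameters so every node sees the same inputs.
run_params = comm.bcast(run_params, root=0)
```

After this point is where I would like all of the nodes to take part in one large OpenMC run.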
I could have each node run x particles and then gather all the results together at the end. In that format, would I need to include the mpi_args argument with openmc.run()? Or, if I run it with mpi_args, does OpenMC do the hard work of collecting the results together once the nodes have finished? Hopefully it does, but I am unsure how it handles this process.
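For example, is the intended pattern something like the sketch below, where the python script itself is not launched under mpirun and openmc.run() starts the MPI processes instead? (The "-n 4" is just an example rank count; on the cluster it would come from the Slurm allocation.)

```python
import openmc

# Placeholder sketch: let openmc.run() launch the MPI-parallel solver itself,
# rather than wrapping the whole python script in mpirun. The rank count is
# only an example value.
openmc.run(mpi_args=["mpiexec", "-n", "4"])
```

Or do I still need the outer mpirun python python_script.py launch, and then call openmc.run() with no mpi_args at all?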
Also, if there are any other best-practice suggestions on how to run OpenMC in this way, they would be very helpful.
In short: I am running a python script via sbatch (a bash file) and I would like to run one simulation over X nodes. I hope this makes sense.