Building a Docker image for use with Singularity

Hi there,

I am currently looking into how to use OpenMC with Singularity for HPC applications. I have tried building a Docker image of OpenMC using the command:

docker build -t debian/openmc:latest https://github.com/openmc-dev/openmc.git#develop

However, I obtain the following error:

Parallel HDF5 was detected, but the detected compiler, /usr/bin/c++, does not support MPI. An MPI-capable compiler must be used with parallel HDF5.

If anyone has advice on how to solve this issue, it would be much appreciated! Additionally, once the Docker image has been created, am I right in assuming that simply pulling the Docker image will give me a Singularity image that can be used on HPC? If anyone has experience using OpenMC and Singularity in tandem, any guidance would be greatly appreciated!
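To be concrete, by pulling I mean converting a Docker image into a Singularity one with something along these lines (the image reference and output filename here are just placeholders):

singularity build openmc.sif docker://<image>:<tag>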

Thanks!

Alternatively, I should also ask: does the command

singularity pull docker://openmc/openmc:latest

create a Singularity image from the latest OpenMC Docker image, with OpenMC pre-installed? I ask because I have tried this as well, but after binding the folder containing the Python script I want to run into the container's filesystem and attempting to run it, I get a ModuleNotFoundError stating that there is no module named 'openmc'. I may be missing something fundamental here, so again any help would be great!
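For reference, even a minimal import check along these lines gives the same error (the .sif filename is just what singularity pull produced for me, and python may need to be python3 inside the container):

# check whether the OpenMC Python API is visible inside the pulled image
singularity exec openmc_latest.sif python -c "import openmc; print(openmc.__version__)"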

Thanks again!

I was able to reproduce the error you’re seeing with that Docker command but have to admit I’m very confused as to why it’s happening. Our Dockerfile explicitly sets -DOPENMC_USE_MPI=on, which should prevent that error (the error only happens if OPENMC_USE_MPI is off).
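For anyone who hits the same message in a manual (non-Docker) build, the usual workaround is to point CMake at an MPI compiler wrapper, roughly like this (the wrapper name and source path will depend on your setup):

# use an MPI-aware C++ compiler so the parallel HDF5 check passes
CXX=mpicxx cmake -DOPENMC_USE_MPI=on /path/to/openmc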

@Shimwell do you have any ideas on this?

@bakingbad I forgot to respond to your second question. I know next to nothing about Singularity, but it looks like that command should pull the pre-built Docker images that are available on Docker Hub.

Hi @paulromano,

For whatever reason, neither singularity pull nor singularity build seems to actually produce a Singularity container with OpenMC installed.

To get around this and the issues described above, I have manually installed OpenMC inside a Singularity container. It seems to work fine when simply running from a shell within the container, but when I attempt to run in parallel using sbatch/Slurm, it gets through a couple of batches before crashing with the error:

double free or corruption (out)

I have attached the full output for reference. As far as I can tell this is an error relating to the MPI processes, but this is very much not my area of expertise. Are there perhaps some default MPI settings in OpenMC that I am not accounting for?
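In case it helps, my job script is roughly along these lines (the module name, task counts, image name, and paths are simplified placeholders for what I actually use):

#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=10

# host MPI, which is meant to match the MPI inside the container
module load openmpi
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# one OpenMC process per MPI task, each running inside the container
mpirun singularity exec --bind /path/to/model:/model openmc.sif \
    python /model/parallel_openmc.py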

Thanks,
Jack :slight_smile:

parallel_openmc.py (8.2 KB)

Quick update: I have tried changing the version of OpenMPI to match the one in the Singularity container. However, what happens now is that several independent instances of OpenMC start running, one for each core I requested, each with 10 OpenMP threads.

Just catching up on this thread.

Are you building the Singularity container on your local computer and then transferring it to the HPC cluster?

OpenMC in the container is linked against, and uses, the MPI installation inside the container, but that MPI also has to communicate with the MPI running on the host system, so getting the versions to match can be tricky.

Perhaps we should add a Singularity definition (.def) file to the openmc repo to help with this sort of workflow. I think a .def file might be preferable to converting the Dockerfile for ironing out these sorts of errors.
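As a rough, untested sketch of what I have in mind, it could look something like this (the base image, package names, and HDF5 hints would all need checking against the current Dockerfile and the cluster's MPI):

Bootstrap: docker
From: debian:bullseye

%post
    # toolchain plus OpenMPI and a parallel HDF5 build (Debian package names)
    apt-get update && apt-get install -y g++ cmake make git ca-certificates \
        python3-pip libopenmpi-dev libhdf5-openmpi-dev
    # build OpenMC against MPI; extra HDF5 hints may be needed depending on the distro layout
    git clone --recurse-submodules https://github.com/openmc-dev/openmc.git /opt/openmc
    cd /opt/openmc && mkdir build && cd build
    CXX=mpicxx cmake -DOPENMC_USE_MPI=on .. && make -j"$(nproc)" && make install
    # install the Python API
    pip3 install /opt/openmc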

Hi @Shimwell,

In this instance I decided to build the Singularity container directly on the HPC cluster, as my desktop does not run Linux, although if needed I could build it in a VM and then transfer it.

I agree, finding the right version is difficult! If it would be possible to have a .def file for OpenMC, that would be incredibly useful! Please let me know if I can be of any help :slight_smile:

Thanks,
Jack

This usually happens if the mpiexec you’re using doesn’t match the version of MPI that was used to compile OpenMC.
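One quick check is to compare the MPI launcher on the host with the MPI runtime inside the container, for example (assuming an MPI launcher is installed in the image; the .sif name here is just an example):

# MPI launcher on the host (the one your job script calls)
mpiexec --version
# MPI runtime inside the container
singularity exec openmc.sif mpiexec --version

If those report different implementations or major versions, each launched process typically runs as an independent serial job, which would explain seeing one full OpenMC run per core.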