Building a Docker image for use with Singularity

Hi there,

I am currently looking into how to use OpenMC with Singularity for HPC applications. I have tried building a Docker image of OpenMC using the command:

docker build -t debian/openmc:latest https://github.com/openmc-dev/openmc.git#develop

However I obtain the following error:

Parallel HDF5 was detected, but the detected compiler, /usr/bin/c++, does not support MPI. An MPI-capable compiler must be used with parallel HDF5.

If anyone has any advice on how to solve this issue, that would be much appreciated! Additionally, once the Docker image has been created, am I right in assuming that simply pulling the Docker image will create a Singularity image that should be usable on HPC? Again, if anyone has any experience using OpenMC and Singularity in tandem then any guidance would be greatly appreciated!
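To be concrete about the second question, what I had in mind was converting the locally built image into a Singularity image along these lines (untested, assuming Singularity 3.x and the image tag from the build command above):

singularity build openmc.sif docker-daemon://debian/openmc:latest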

Thanks!

I should add an alternative question: does the command

singularity pull docker://openmc/openmc:latest

create a Singularity image from the latest OpenMC Docker image, with OpenMC pre-installed? I ask because I have tried this as well, but after binding the folder containing the Python script I want to run into the filesystem of the Singularity image and attempting to run it, I get a ModuleNotFoundError stating that there is no module named ‘openmc’. I may be missing something fundamental here, so again any help would be great!
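For reference, roughly what I am running is the following (the .sif name, bind paths and script name here are placeholders for my actual ones):

singularity pull docker://openmc/openmc:latest
singularity exec --bind /path/to/scripts:/scripts openmc_latest.sif python /scripts/my_script.py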

Thanks again!

I was able to reproduce the error you’re seeing with that Docker command but have to admit I’m very confused as to why it’s happening. Our Dockerfile explicitly sets -DOPENMC_USE_MPI=on, which should prevent that error (the error only happens if OPENMC_USE_MPI is off).

@Shimwell do you have any ideas on this?

@bakingbad I forgot to respond to your second question. I know next to nothing about Singularity, but it looks like that command should pull the pre-built Docker images that are available from Docker Hub.

Hi @paulromano,

For whatever reason, neither the singularity pull nor the singularity build command seems to produce a Singularity container with OpenMC actually installed.

To get around this and the issues described above, I have manually installed OpenMC within a Singularity container. It seems to work fine when simply running from a shell within the container, but when I attempt to run in parallel using sbatch/Slurm, it runs a couple of batches before crashing with the error:

double free or corruption (out)

I have attached the full output for reference. As far as I can tell, this is an error relating to MPI processes, but this is very much not my area of expertise. Are there perhaps some default MPI settings in OpenMC that I may not be accounting for?
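Roughly, the way I am launching it is along these lines (the module name, .sif name, paths and resource numbers are placeholders for my actual job script):

#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=10

module load openmpi                          # placeholder for the cluster's MPI module
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK  # one OpenMP team per MPI rank

mpirun -np $SLURM_NTASKS singularity exec --bind $PWD openmc.sif python parallel_openmc.py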

Thanks,
Jack :slight_smile:

parallel_openmc.py (8.2 KB)

Quick update: I have tried changing the version of OpenMPI to match the one in the Singularity container; however, now several instances of OpenMC start running, equal in number to the cores I requested, each with 10 OpenMP threads.

Just catching up on this thread.

Are you building the singularity container on your local computer then transferring it to the HPC cluster?

OpenMC in the container is linked against, and uses, the MPI installation within the container, but that MPI also has to communicate with the MPI running on the host system, so getting the versions to match can be tricky.
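For example, a typical hybrid launch looks something like this (the .sif name is just an example), with the host mpirun starting the ranks and each rank then using the MPI library inside the container, which is why the two need to be compatible:

mpirun -np 4 singularity exec openmc.sif openmc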

Perhaps we should add a Singularity definition (.def) file to the openmc repo to help with this sort of activity. I think a .def file might be preferable to converting the Dockerfile, and would help iron out these sorts of errors.
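As a starting point, here is an untested sketch of what such a .def file could look like (the package names and the HDF5 path below are Ubuntu-specific assumptions and would need checking):

Bootstrap: docker
From: ubuntu:22.04

%post
    export DEBIAN_FRONTEND=noninteractive
    # Toolchain plus OpenMPI and the OpenMPI flavour of parallel HDF5 from the Ubuntu repositories
    apt-get update && apt-get install -y \
        git g++ cmake python3 python3-pip \
        libopenmpi-dev libhdf5-openmpi-dev
    # Help CMake find Ubuntu's parallel HDF5 (this path may differ on other distros)
    export HDF5_ROOT=/usr/lib/x86_64-linux-gnu/hdf5/openmpi
    # Build OpenMC against the MPI inside the container
    git clone --recurse-submodules https://github.com/openmc-dev/openmc.git /opt/openmc
    mkdir /opt/openmc/build && cd /opt/openmc/build
    CC=mpicc CXX=mpicxx cmake -DOPENMC_USE_MPI=on ..
    make -j4 install
    # Install the Python API from the same source tree
    pip3 install /opt/openmc

%runscript
    exec openmc "$@"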

Hi @Shimwell,

In this instance I decided to build the Singularity container directly on the HPC cluster, as my desktop machine does not run Linux, although if needed I could build it on a VM and then transfer it.

I agree, finding the right version is difficult! If it would be possible to have a .def file for OpenMC, that would be incredibly useful! Please let me know if I can be of any help :slight_smile:

Thanks,
Jack

This usually happens if the mpiexec you’re using doesn’t match the version of MPI that was used to compile OpenMC.
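A quick way to compare the two is something like the following (the .sif name is just an example):

mpiexec --version                                      # MPI on the host
singularity exec openmc_latest.sif mpiexec --version   # MPI inside the container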

Did you ever manage to resolve this issue? I am attempting to set up a Docker container running OpenMC with DAGMC on my local machine using:

docker build -t openmc_dagmc_libmesh --build-arg build_dagmc=on --build-arg build_libmesh=on --build-arg compile_cores=8 .

but I am running into the same issue as above:
Parallel HDF5 was detected, but the detected compiler, /usr/bin/c++, does not support MPI. An MPI-capable compiler must be used with parallel HDF5.

The only way I have found to build it is to remove the -DHDF5_PREFER_PARALLEL=on option in the Dockerfile. However, when attempting to run a script that uses DAGMC (in this case the Using CAD-Based Geometries notebook from the Jupyter Notebook examples), OpenMC isn't configured to run with DAGMC and exits with RuntimeError: DAGMC Universes are present but OpenMC was not configured with DAGMC

I have run into the same issue when using the OpenMC Dockerfile as well; however, using @Shimwell's Dockerfile for building the neutronics workshop, I didn't have this issue. Here's the link to that repository: https://github.com/fusion-energy/neutronics-workshop
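For anyone else hitting this, that Dockerfile can presumably be built the same way as above, e.g. (untested, assuming the Dockerfile sits at the repository root):

docker build -t neutronics-workshop https://github.com/fusion-energy/neutronics-workshop.git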

The MPI version on the host must be newer than the version in the container.


Perhaps we could make MPI an optional inclusion when building the Dockerfile.

I’m having issues updating my version of MPI on my Ubuntu system… is there any way around this? If it’s building in a Docker container, it shouldn’t matter what the host system has installed, should it?

Yeah, building the container shouldn’t be affected by the system MPI, but running that container of course will be. To my knowledge, though, that only applies to running MPI programs from the container, not to building MPI programs inside it.

Could you try prepending the cmake build command with the MPI compiler wrappers, but leave all the HDF5 options alone:

CC=mpicc CXX=mpicxx cmake .. etc

Does that do the trick?

This now builds the Docker container, but I still get the error:

DAGMC Universes are present but OpenMC was not configured with DAGMC